Posted to gitbox@hive.apache.org by GitBox <gi...@apache.org> on 2020/12/14 17:16:04 UTC

[GitHub] [hive] Noremac201 opened a new pull request #1777: HIVE-24470 - Separate HiveMetastore Thrift and Driver logic

Noremac201 opened a new pull request #1777:
URL: https://github.com/apache/hive/pull/1777


   Supersedes #1740: the same change, reopened from the correct local branch
   
   # What changes were proposed in this pull request?
   Refactor HiveMetastore.HMSHandler into its own class
   # Why are the changes needed?
   This paves the way for cleaner changes: the driver class is no longer nested alongside the ~10,000-line HMSHandler, so there is a clearer separation of duties.
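
   Roughly, the structural change looks like this (an illustrative sketch with method bodies elided; the "before" shape reflects the old nesting named in the title):

       // Before: the Thrift handler was a class nested inside the driver class.
       public class HiveMetaStore {
         public static class HMSHandler extends FacebookBase implements IHMSHandler {
           // ~10,000 lines of Thrift handler logic
         }
       }

       // After: HMSHandler is a top-level class in its own file, and
       // HiveMetaStore keeps only the driver/server logic.
       public class HMSHandler extends FacebookBase implements IHMSHandler {
         // handler logic moved over unchanged
       }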
   
   # Does this PR introduce any user-facing change?
   No
   
   # How was this patch tested?
   Existing unit tests, plus building and running manually.
   No additional tests were added since this is a pure refactoring.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org
For additional commands, e-mail: gitbox-help@hive.apache.org


[GitHub] [hive] Noremac201 commented on pull request #1777: HIVE-24470 - Separate HiveMetastore Thrift and Driver logic

Posted by GitBox <gi...@apache.org>.
Noremac201 commented on pull request #1777:
URL: https://github.com/apache/hive/pull/1777#issuecomment-744585054


   @miklosgergely @belugabehr 
   
   FYI



[GitHub] [hive] Noremac201 commented on pull request #1777: HIVE-24470 - Separate HiveMetastore Thrift and Driver logic

Posted by GitBox <gi...@apache.org>.
Noremac201 commented on pull request #1777:
URL: https://github.com/apache/hive/pull/1777#issuecomment-749093231


   Closed in favor of #1787 



[GitHub] [hive] dataproc-metastore commented on a change in pull request #1777: HIVE-24470 - Separate HiveMetastore Thrift and Driver logic

Posted by GitBox <gi...@apache.org>.
dataproc-metastore commented on a change in pull request #1777:
URL: https://github.com/apache/hive/pull/1777#discussion_r546209366



##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
##########
@@ -0,0 +1,10137 @@
+package org.apache.hadoop.hive.metastore;
+

Review comment:
       Please add license header.
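
       For reference, the standard ASF license header used across Apache Hive source files goes above the package declaration:

           /*
            * Licensed to the Apache Software Foundation (ASF) under one
            * or more contributor license agreements.  See the NOTICE file
            * distributed with this work for additional information
            * regarding copyright ownership.  The ASF licenses this file
            * to you under the Apache License, Version 2.0 (the
            * "License"); you may not use this file except in compliance
            * with the License.  You may obtain a copy of the License at
            *
            *     http://www.apache.org/licenses/LICENSE-2.0
            *
            * Unless required by applicable law or agreed to in writing, software
            * distributed under the License is distributed on an "AS IS" BASIS,
            * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
            * See the License for the specific language governing permissions and
            * limitations under the License.
            */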

##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
##########
@@ -0,0 +1,10137 @@
+package org.apache.hadoop.hive.metastore;
+
+import com.codahale.metrics.Counter;
+import com.codahale.metrics.Timer;
+import com.facebook.fb303.FacebookBase;
+import com.facebook.fb303.fb_status;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.base.Splitter;
+import com.google.common.base.Supplier;
+import com.google.common.base.Suppliers;
+import com.google.common.collect.Lists;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.AcidConstants;
+import org.apache.hadoop.hive.common.AcidMetaDataFile;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.common.TableName;
+import org.apache.hadoop.hive.common.ValidReaderWriteIdList;
+import org.apache.hadoop.hive.common.ValidWriteIdList;
+import org.apache.hadoop.hive.common.repl.ReplConst;
+import org.apache.hadoop.hive.metastore.api.*;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.events.AbortTxnEvent;
+import org.apache.hadoop.hive.metastore.events.AcidWriteEvent;
+import org.apache.hadoop.hive.metastore.events.AddCheckConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddDefaultConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddForeignKeyEvent;
+import org.apache.hadoop.hive.metastore.events.AddNotNullConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AddPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.AddPrimaryKeyEvent;
+import org.apache.hadoop.hive.metastore.events.AddSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.AddUniqueConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.AllocWriteIdEvent;
+import org.apache.hadoop.hive.metastore.events.AlterCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.AlterDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.AlterISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.AlterPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.AlterSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.AlterTableEvent;
+import org.apache.hadoop.hive.metastore.events.CommitTxnEvent;
+import org.apache.hadoop.hive.metastore.events.ConfigChangeEvent;
+import org.apache.hadoop.hive.metastore.events.CreateCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.CreateDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.CreateFunctionEvent;
+import org.apache.hadoop.hive.metastore.events.CreateISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.CreateTableEvent;
+import org.apache.hadoop.hive.metastore.events.DeletePartitionColumnStatEvent;
+import org.apache.hadoop.hive.metastore.events.DeleteTableColumnStatEvent;
+import org.apache.hadoop.hive.metastore.events.DropCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.DropConstraintEvent;
+import org.apache.hadoop.hive.metastore.events.DropDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.DropFunctionEvent;
+import org.apache.hadoop.hive.metastore.events.DropISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.DropPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.DropSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.DropTableEvent;
+import org.apache.hadoop.hive.metastore.events.InsertEvent;
+import org.apache.hadoop.hive.metastore.events.LoadPartitionDoneEvent;
+import org.apache.hadoop.hive.metastore.events.OpenTxnEvent;
+import org.apache.hadoop.hive.metastore.events.PreAddPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.PreAddSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.PreAlterCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.PreAlterDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.PreAlterISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.PreAlterPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.PreAlterSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.PreAlterTableEvent;
+import org.apache.hadoop.hive.metastore.events.PreAuthorizationCallEvent;
+import org.apache.hadoop.hive.metastore.events.PreCreateCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.PreCreateDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.PreCreateISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.PreCreateTableEvent;
+import org.apache.hadoop.hive.metastore.events.PreDropCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.PreDropDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.PreDropISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.PreDropPartitionEvent;
+import org.apache.hadoop.hive.metastore.events.PreDropSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.PreDropTableEvent;
+import org.apache.hadoop.hive.metastore.events.PreEventContext;
+import org.apache.hadoop.hive.metastore.events.PreLoadPartitionDoneEvent;
+import org.apache.hadoop.hive.metastore.events.PreReadCatalogEvent;
+import org.apache.hadoop.hive.metastore.events.PreReadDatabaseEvent;
+import org.apache.hadoop.hive.metastore.events.PreReadISchemaEvent;
+import org.apache.hadoop.hive.metastore.events.PreReadTableEvent;
+import org.apache.hadoop.hive.metastore.events.PreReadhSchemaVersionEvent;
+import org.apache.hadoop.hive.metastore.events.UpdatePartitionColumnStatEvent;
+import org.apache.hadoop.hive.metastore.events.UpdateTableColumnStatEvent;
+import org.apache.hadoop.hive.metastore.messaging.EventMessage;
+import org.apache.hadoop.hive.metastore.metrics.Metrics;
+import org.apache.hadoop.hive.metastore.metrics.MetricsConstants;
+import org.apache.hadoop.hive.metastore.metrics.PerfLogger;
+import org.apache.hadoop.hive.metastore.partition.spec.PartitionSpecProxy;
+import org.apache.hadoop.hive.metastore.txn.CompactionInfo;
+import org.apache.hadoop.hive.metastore.txn.TxnStore;
+import org.apache.hadoop.hive.metastore.txn.TxnUtils;
+import org.apache.hadoop.hive.metastore.utils.FileUtils;
+import org.apache.hadoop.hive.metastore.utils.FilterUtils;
+import org.apache.hadoop.hive.metastore.utils.HdfsUtils;
+import org.apache.hadoop.hive.metastore.utils.JavaUtils;
+import org.apache.hadoop.hive.metastore.utils.MetaStoreServerUtils;
+import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
+import org.apache.hadoop.hive.metastore.utils.MetastoreVersionInfo;
+import org.apache.hadoop.hive.metastore.utils.SecurityUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.thrift.TException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.jdo.JDOException;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Modifier;
+import java.lang.reflect.UndeclaredThrowableException;
+import java.nio.ByteBuffer;
+import java.security.PrivilegedExceptionAction;
+import java.util.AbstractMap;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.BitSet;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.regex.Pattern;
+
+import static org.apache.commons.lang3.StringUtils.isBlank;
+import static org.apache.commons.lang3.StringUtils.join;
+import static org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_CATALOG_NAME;
+import static org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_COMMENT;
+import static org.apache.hadoop.hive.metastore.Warehouse.DEFAULT_DATABASE_NAME;
+import static org.apache.hadoop.hive.metastore.Warehouse.getCatalogQualifiedTableName;
+import static org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.CAT_NAME;
+import static org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.DB_NAME;
+import static org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.getDefaultCatalog;
+import static org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.parseDbName;
+import static org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.prependCatalogToDbName;
+import static org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.prependNotNullCatToDbName;
+
+public class HMSHandler extends FacebookBase implements IHMSHandler {
+  public static final Logger LOG = HiveMetaStore.LOG;
+  private final Configuration conf; // stores datastore (jpox) properties,
+  // right now they come from jpox.properties
+
+  // Flag to ensure the "always" task threads are initialized only once,
+  // not multiple times
+  private final static AtomicBoolean alwaysThreadsInitialized =
+      new AtomicBoolean(false);
+
+  private static String currentUrl;
+  private FileMetadataManager fileMetadataManager;
+  private PartitionExpressionProxy expressionProxy;
+  private StorageSchemaReader storageSchemaReader;
+  private IMetaStoreMetadataTransformer transformer;
+
+  // Variables for metrics
+  // Package visible so that HMSMetricsListener can see them.
+  static AtomicInteger databaseCount, tableCount, partCount;
+
+  private Warehouse wh; // hdfs warehouse
+  private static final ThreadLocal<RawStore> threadLocalMS =
+      new ThreadLocal<RawStore>() {
+        @Override
+        protected RawStore initialValue() {
+          return null;
+        }
+      };
+
+  private static final ThreadLocal<TxnStore> threadLocalTxn = new ThreadLocal<TxnStore>() {
+    @Override
+    protected TxnStore initialValue() {
+      return null;
+    }
+  };
+
+  private static final ThreadLocal<Map<String, Timer.Context>> timerContexts =
+      new ThreadLocal<Map<String, Timer.Context>>() {
+        @Override
+        protected Map<String, Timer.Context> initialValue() {
+          return new HashMap<>();
+        }
+      };
+
+  public static RawStore getRawStore() {
+    return threadLocalMS.get();
+  }
+
+  static void removeRawStore() {
+    threadLocalMS.remove();
+  }
+
+  // Thread local configuration is needed as many threads could make changes
+  // to the conf using the connection hook
+  static final ThreadLocal<Configuration> threadLocalConf =
+      new ThreadLocal<Configuration>() {
+        @Override
+        protected Configuration initialValue() {
+          return null;
+        }
+      };
+
+  /**
+   * Thread local HMSHandler used during shutdown to notify meta listeners
+   */
+  static final ThreadLocal<HMSHandler> threadLocalHMSHandler = new ThreadLocal<>();
+
+  /**
+   * Thread local Map to keep track of modified meta conf keys
+   */
+  static final ThreadLocal<Map<String, String>> threadLocalModifiedConfig =
+      new ThreadLocal<>();
+
+  private static ExecutorService threadPool;
+
+  static final Logger auditLog = LoggerFactory.getLogger(
+      HiveMetaStore.class.getName() + ".audit");
+
+  private static void logAuditEvent(String cmd) {
+    if (cmd == null) {
+      return;
+    }
+
+    UserGroupInformation ugi;
+    try {
+      ugi = SecurityUtils.getUGI();
+    } catch (Exception ex) {
+      throw new RuntimeException(ex);
+    }
+
+    String address = getIPAddress();
+    if (address == null) {
+      address = "unknown-ip-addr";
+    }
+
+    auditLog.info("ugi={}	ip={}	cmd={}	", ugi.getUserName(), address, cmd);
+  }
+
+  public static String getIPAddress() {
+    if (HiveMetaStore.useSasl) {
+      if (HiveMetaStore.saslServer != null && HiveMetaStore.saslServer.getRemoteAddress() != null) {
+        return HiveMetaStore.saslServer.getRemoteAddress().getHostAddress();
+      }
+    } else {
+      // if kerberos is not enabled
+      return getThreadLocalIpAddress();
+    }
+    return null;
+  }
+
+  private static AtomicInteger nextSerialNum = new AtomicInteger();
+  private static ThreadLocal<Integer> threadLocalId = new ThreadLocal<Integer>() {
+    @Override
+    protected Integer initialValue() {
+      return nextSerialNum.getAndIncrement();
+    }
+  };
+
+  // This will only be set if the metastore is being accessed from a metastore Thrift server,
+  // not if it is from the CLI. Also, only if the TTransport being used to connect is an
+  // instance of TSocket. This is also not set when kerberos is used.
+  private static ThreadLocal<String> threadLocalIpAddress = new ThreadLocal<String>() {
+    @Override
+    protected String initialValue() {
+      return null;
+    }
+  };
+
+  /**
+   * Internal function to notify listeners for meta config change events
+   */
+  private void notifyMetaListeners(String key, String oldValue, String newValue) throws MetaException {
+    for (MetaStoreEventListener listener : listeners) {
+      listener.onConfigChange(new ConfigChangeEvent(this, key, oldValue, newValue));
+    }
+
+    if (transactionalListeners.size() > 0) {
+      // All the fields of this event are final, so no reason to create a new one for each
+      // listener
+      ConfigChangeEvent cce = new ConfigChangeEvent(this, key, oldValue, newValue);
+      for (MetaStoreEventListener transactionalListener : transactionalListeners) {
+        transactionalListener.onConfigChange(cce);
+      }
+    }
+  }
+
+  /**
+   * Internal function to notify listeners to revert to the old values of keys
+   * that were modified during setMetaConf. This is called from HiveMetaStore#cleanupRawStore
+   */
+  void notifyMetaListenersOnShutDown() {
+    Map<String, String> modifiedConf = threadLocalModifiedConfig.get();
+    if (modifiedConf == null) {
+      // Nothing got modified
+      return;
+    }
+    try {
+      Configuration conf = threadLocalConf.get();
+      if (conf == null) {
+        throw new MetaException("Unexpected: modifiedConf is non-null but conf is null");
+      }
+      // Notify listeners of the changed value
+      for (Map.Entry<String, String> entry : modifiedConf.entrySet()) {
+        String key = entry.getKey();
+        // curr value becomes old and vice-versa
+        String currVal = entry.getValue();
+        String oldVal = conf.get(key);
+        if (!Objects.equals(oldVal, currVal)) {
+          notifyMetaListeners(key, oldVal, currVal);
+        }
+      }
+      logAndAudit("Meta listeners shutdown notification completed.");
+    } catch (MetaException e) {
+      LOG.error("Failed to notify meta listeners on shutdown: ", e);
+    }
+  }
+
+  static void setThreadLocalIpAddress(String ipAddress) {
+    threadLocalIpAddress.set(ipAddress);
+  }
+
+  // This will return null if the metastore is not being accessed from a metastore Thrift server,
+  // or if the TTransport being used to connect is not an instance of TSocket, or if kerberos
+  // is used
+  static String getThreadLocalIpAddress() {
+    return threadLocalIpAddress.get();
+  }
+
+  // Make it possible for tests to check that the right type of PartitionExpressionProxy was
+  // instantiated.
+  @VisibleForTesting
+  PartitionExpressionProxy getExpressionProxy() {
+    return expressionProxy;
+  }
+
+  /**
+   * Use {@link #getThreadId()} instead.
+   *
+   * @return thread id
+   */
+  @Deprecated
+  public static Integer get() {
+    return threadLocalId.get();
+  }
+
+  @Override
+  public int getThreadId() {
+    return threadLocalId.get();
+  }
+
+  public HMSHandler(String name) throws MetaException {
+    this(name, MetastoreConf.newMetastoreConf(), true);
+  }
+
+  public HMSHandler(String name, Configuration conf) throws MetaException {
+    this(name, conf, true);
+  }
+
+  public HMSHandler(String name, Configuration conf, boolean init) throws MetaException {
+    super(name);
+    this.conf = conf;
+    isInTest = MetastoreConf.getBoolVar(this.conf, MetastoreConf.ConfVars.HIVE_IN_TEST);
+    if (threadPool == null) {
+      synchronized (HMSHandler.class) {
+        int numThreads = MetastoreConf.getIntVar(conf, MetastoreConf.ConfVars.FS_HANDLER_THREADS_COUNT);
+        threadPool = Executors.newFixedThreadPool(numThreads,
+            new ThreadFactoryBuilder().setDaemon(true)
+                .setNameFormat("HMSHandler #%d").build());
+      }
+    }
+    if (init) {
+      init();
+    }
+  }
+
+  /**
+   * Use {@link #getConf()} instead.
+   *
+   * @return Configuration object
+   */
+  @Deprecated
+  public Configuration getHiveConf() {
+    return conf;
+  }
+
+  private ClassLoader classLoader;
+  private AlterHandler alterHandler;
+  private List<MetaStorePreEventListener> preListeners;
+  private List<MetaStoreEventListener> listeners;
+  private List<TransactionalMetaStoreEventListener> transactionalListeners;
+  private List<MetaStoreEndFunctionListener> endFunctionListeners;
+  private List<MetaStoreInitListener> initListeners;
+  private MetaStoreFilterHook filterHook;
+  private boolean isServerFilterEnabled = false;
+
+  private Pattern partitionValidationPattern;
+  private final boolean isInTest;
+
+  {
+    classLoader = Thread.currentThread().getContextClassLoader();
+    if (classLoader == null) {
+      classLoader = Configuration.class.getClassLoader();
+    }
+  }
+
+  @Override
+  public List<TransactionalMetaStoreEventListener> getTransactionalListeners() {
+    return transactionalListeners;
+  }
+
+  @Override
+  public List<MetaStoreEventListener> getListeners() {
+    return listeners;
+  }
+
+  @Override
+  public void init() throws MetaException {
+    initListeners = MetaStoreServerUtils.getMetaStoreListeners(
+        MetaStoreInitListener.class, conf, MetastoreConf.getVar(conf, MetastoreConf.ConfVars.INIT_HOOKS));
+    for (MetaStoreInitListener singleInitListener : initListeners) {
+      MetaStoreInitContext context = new MetaStoreInitContext();
+      singleInitListener.onInit(context);
+    }
+
+    String alterHandlerName = MetastoreConf.getVar(conf, MetastoreConf.ConfVars.ALTER_HANDLER);
+    alterHandler = ReflectionUtils.newInstance(JavaUtils.getClass(
+        alterHandlerName, AlterHandler.class), conf);
+    wh = new Warehouse(conf);
+
+    synchronized (HMSHandler.class) {
+      if (currentUrl == null || !currentUrl.equals(MetaStoreInit.getConnectionURL(conf))) {
+        createDefaultDB();
+        createDefaultRoles();
+        addAdminUsers();
+        currentUrl = MetaStoreInit.getConnectionURL(conf);
+      }
+    }
+
+    //Start Metrics
+    if (MetastoreConf.getBoolVar(conf, MetastoreConf.ConfVars.METRICS_ENABLED)) {
+      LOG.info("Begin calculating metadata count metrics.");
+      Metrics.initialize(conf);
+      databaseCount = Metrics.getOrCreateGauge(MetricsConstants.TOTAL_DATABASES);
+      tableCount = Metrics.getOrCreateGauge(MetricsConstants.TOTAL_TABLES);
+      partCount = Metrics.getOrCreateGauge(MetricsConstants.TOTAL_PARTITIONS);
+      updateMetrics();
+
+    }
+
+    preListeners = MetaStoreServerUtils.getMetaStoreListeners(MetaStorePreEventListener.class,
+        conf, MetastoreConf.getVar(conf, MetastoreConf.ConfVars.PRE_EVENT_LISTENERS));
+    preListeners.add(0, new TransactionalValidationListener(conf));
+    listeners = MetaStoreServerUtils.getMetaStoreListeners(MetaStoreEventListener.class, conf,
+        MetastoreConf.getVar(conf, MetastoreConf.ConfVars.EVENT_LISTENERS));
+    listeners.add(new SessionPropertiesListener(conf));
+    transactionalListeners = MetaStoreServerUtils.getMetaStoreListeners(TransactionalMetaStoreEventListener.class,
+        conf, MetastoreConf.getVar(conf, MetastoreConf.ConfVars.TRANSACTIONAL_EVENT_LISTENERS));
+    transactionalListeners.add(new AcidEventListener(conf));
+    if (Metrics.getRegistry() != null) {
+      listeners.add(new HMSMetricsListener(conf));
+    }
+
+    boolean canCachedStoreCanUseEvent = false;
+    for (MetaStoreEventListener listener : transactionalListeners) {
+      if (listener.doesAddEventsToNotificationLogTable()) {
+        canCachedStoreCanUseEvent = true;
+        break;
+      }
+    }
+    if (conf.getBoolean(MetastoreConf.ConfVars.METASTORE_CACHE_CAN_USE_EVENT.getVarname(), false) &&
+        !canCachedStoreCanUseEvent) {
+      throw new MetaException("CahcedStore can not use events for invalidation as there is no " +
+          " TransactionalMetaStoreEventListener to add events to notification table");
+    }
+
+    endFunctionListeners = MetaStoreServerUtils.getMetaStoreListeners(
+        MetaStoreEndFunctionListener.class, conf, MetastoreConf.getVar(conf, MetastoreConf.ConfVars.END_FUNCTION_LISTENERS));
+
+    String partitionValidationRegex =
+        MetastoreConf.getVar(conf, MetastoreConf.ConfVars.PARTITION_NAME_WHITELIST_PATTERN);
+    if (partitionValidationRegex != null && !partitionValidationRegex.isEmpty()) {
+      partitionValidationPattern = Pattern.compile(partitionValidationRegex);
+    } else {
+      partitionValidationPattern = null;
+    }
+
+    // We initialize the tasks that need to run periodically only once. For a remote metastore,
+    // these threads are started along with the other housekeeping threads only in the leader
+    // HMS.
+    String leaderHost = MetastoreConf.getVar(conf,
+        MetastoreConf.ConfVars.METASTORE_HOUSEKEEPING_LEADER_HOSTNAME);
+    if (!HiveMetaStore.isMetaStoreRemote() && ((leaderHost == null) || leaderHost.trim().isEmpty())) {
+      startAlwaysTaskThreads(conf);
+    } else if (!HiveMetaStore.isMetaStoreRemote()) {
+      LOG.info("Not starting tasks specified by " + MetastoreConf.ConfVars.TASK_THREADS_ALWAYS.getVarname() +
+          " since " + leaderHost + " is configured to run these tasks.");
+    }
+    expressionProxy = PartFilterExprUtil.createExpressionProxy(conf);
+    fileMetadataManager = new FileMetadataManager(this.getMS(), conf);
+
+    isServerFilterEnabled = getIfServerFilterenabled();
+    filterHook = isServerFilterEnabled ? loadFilterHooks() : null;
+
+    String className = MetastoreConf.getVar(conf, MetastoreConf.ConfVars.METASTORE_METADATA_TRANSFORMER_CLASS);
+    if (className != null && !className.trim().isEmpty()) {
+      Class<?> clazz;
+      try {
+        clazz = conf.getClassByName(className);
+      } catch (ClassNotFoundException e) {
+        LOG.error("Unable to load class " + className, e);
+        throw new IllegalArgumentException(e);
+      }
+      Constructor<?> constructor;
+      try {
+        constructor = clazz.getConstructor(IHMSHandler.class);
+        if (Modifier.isPrivate(constructor.getModifiers()))
+          throw new IllegalArgumentException("Illegal implementation for metadata transformer. Constructor is private");
+        transformer = (IMetaStoreMetadataTransformer) constructor.newInstance(this);
+      } catch (NoSuchMethodException | InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
+        LOG.error("Unable to create instance of class " + className, e);
+        throw new IllegalArgumentException(e);
+      }
+    }
+  }
+
+  static void startAlwaysTaskThreads(Configuration conf) throws MetaException {
+    if (alwaysThreadsInitialized.compareAndSet(false, true)) {
+      ThreadPool.initialize(conf);
+      Collection<String> taskNames =
+          MetastoreConf.getStringCollection(conf, MetastoreConf.ConfVars.TASK_THREADS_ALWAYS);
+      for (String taskName : taskNames) {
+        MetastoreTaskThread task =
+            JavaUtils.newInstance(JavaUtils.getClass(taskName, MetastoreTaskThread.class));
+        task.setConf(conf);
+        long freq = task.runFrequency(TimeUnit.MILLISECONDS);
+        LOG.info("Scheduling for " + task.getClass().getCanonicalName() + " service with " +
+            "frequency " + freq + "ms.");
+        // For backwards compatibility, since some threads used to be hard coded but only run if
+        // frequency was > 0
+        if (freq > 0) {
+          ThreadPool.getPool().scheduleAtFixedRate(task, freq, freq, TimeUnit.MILLISECONDS);
+        }
+      }
+    }
+  }
+
+  /**
+   * Filtering is enabled only when a filter hook is configured, the configured hook is not
+   * the default one, and filtering is enabled in the configuration.
+   *
+   * @return true if server-side filtering is enabled
+   */
+  private boolean getIfServerFilterenabled() throws MetaException {
+    boolean isEnabled = MetastoreConf.getBoolVar(conf, MetastoreConf.ConfVars.METASTORE_SERVER_FILTER_ENABLED);
+
+    if (!isEnabled) {
+      LOG.info("HMS server filtering is disabled by configuration");
+      return false;
+    }
+
+    String filterHookClassName = MetastoreConf.getVar(conf, MetastoreConf.ConfVars.FILTER_HOOK);
+
+    if (isBlank(filterHookClassName)) {
+      throw new MetaException("HMS server filtering is enabled but no filter hook is configured");
+    }
+
+    if (filterHookClassName.trim().equalsIgnoreCase(DefaultMetaStoreFilterHookImpl.class.getName())) {
+      throw new MetaException("HMS server filtering is enabled but the filter hook is DefaultMetaStoreFilterHookImpl, which does no filtering");
+    }
+
+    LOG.info("HMS server filtering is enabled. The filter class is " + filterHookClassName);
+    return true;
+  }
+
+  private MetaStoreFilterHook loadFilterHooks() throws IllegalStateException {
+    String errorMsg = "Unable to load filter hook at HMS server. ";
+
+    String filterHookClassName = MetastoreConf.getVar(conf, MetastoreConf.ConfVars.FILTER_HOOK);
+    Preconditions.checkState(!isBlank(filterHookClassName));
+
+    try {
+      return (MetaStoreFilterHook) Class.forName(
+          filterHookClassName.trim(), true, JavaUtils.getClassLoader()).getConstructor(
+          Configuration.class).newInstance(conf);
+    } catch (Exception e) {
+      LOG.error(errorMsg, e);
+      throw new IllegalStateException(errorMsg + e.getMessage(), e);
+    }
+  }
+
+  /**
+   * Check whether the user can access the table associated with the partition. If not, throw an
+   * exception so the user cannot access partitions associated with this table.
+   * We do not call the pre-event listener for authorization because it requires getting the
+   * table object from the DB, which is more overhead. Instead, we call the filter hook to filter
+   * out the table if the user has no access. The filter hook only requires the table name, not
+   * the table object. That saves the DB access for the table object while still achieving the
+   * same purpose: checking whether the user can access the specified table.
+   *
+   * @param catName catalog name of the table
+   * @param dbName  database name of the table
+   * @param tblName table name
+   * @throws NoSuchObjectException
+   * @throws MetaException
+   */
+  private void authorizeTableForPartitionMetadata(
+      final String catName, final String dbName, final String tblName)
+      throws NoSuchObjectException, MetaException {
+
+    FilterUtils.checkDbAndTableFilters(
+        isServerFilterEnabled, filterHook, catName, dbName, tblName);
+  }
+
+  private static String addPrefix(String s) {
+    return threadLocalId.get() + ": " + s;
+  }
+
+  /**
+   * Set copy of invoking HMSHandler on thread local
+   */
+  private static void setHMSHandler(HMSHandler handler) {
+    if (threadLocalHMSHandler.get() == null) {
+      threadLocalHMSHandler.set(handler);
+    }
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    threadLocalConf.set(conf);
+    RawStore ms = threadLocalMS.get();
+    if (ms != null) {
+      ms.setConf(conf); // reload if DS related configuration is changed
+    }
+  }
+
+  @Override
+  public Configuration getConf() {
+    Configuration conf = threadLocalConf.get();
+    if (conf == null) {
+      conf = new Configuration(this.conf);
+      threadLocalConf.set(conf);
+    }
+    return conf;
+  }
+
+  private Map<String, String> getModifiedConf() {
+    Map<String, String> modifiedConf = threadLocalModifiedConfig.get();
+    if (modifiedConf == null) {
+      modifiedConf = new HashMap<>();
+      threadLocalModifiedConfig.set(modifiedConf);
+    }
+    return modifiedConf;
+  }
+
+  @Override
+  public Warehouse getWh() {
+    return wh;
+  }
+
+  @Override
+  public void setMetaConf(String key, String value) throws MetaException {
+    MetastoreConf.ConfVars confVar = MetastoreConf.getMetaConf(key);
+    if (confVar == null) {
+      throw new MetaException("Invalid configuration key " + key);
+    }
+    try {
+      confVar.validate(value);
+    } catch (IllegalArgumentException e) {
+      throw new MetaException("Invalid configuration value " + value + " for key " + key +
+          " by " + e.getMessage());
+    }
+    Configuration configuration = getConf();
+    String oldValue = MetastoreConf.get(configuration, key);
+    // Save prev val of the key on threadLocal
+    Map<String, String> modifiedConf = getModifiedConf();
+    if (!modifiedConf.containsKey(key)) {
+      modifiedConf.put(key, oldValue);
+    }
+    // Set invoking HMSHandler on threadLocal, this will be used later to notify
+    // metaListeners in HiveMetaStore#cleanupRawStore
+    setHMSHandler(this);
+    configuration.set(key, value);
+    notifyMetaListeners(key, oldValue, value);
+
+    if (MetastoreConf.ConfVars.TRY_DIRECT_SQL == confVar) {
+      HMSHandler.LOG.info("Direct SQL optimization = {}", value);
+    }
+  }
+
+  @Override
+  public String getMetaConf(String key) throws MetaException {
+    MetastoreConf.ConfVars confVar = MetastoreConf.getMetaConf(key);
+    if (confVar == null) {
+      throw new MetaException("Invalid configuration key " + key);
+    }
+    return getConf().get(key, confVar.getDefaultVal().toString());
+  }
+
+  /**
+   * Get a cached RawStore.
+   *
+   * @return the cached RawStore
+   * @throws MetaException
+   */
+  @Override
+  public RawStore getMS() throws MetaException {
+    Configuration conf = getConf();
+    return getMSForConf(conf);
+  }
+
+  public static RawStore getMSForConf(Configuration conf) throws MetaException {
+    RawStore ms = threadLocalMS.get();
+    if (ms == null) {
+      ms = newRawStoreForConf(conf);
+      try {
+        ms.verifySchema();
+      } catch (MetaException e) {
+        ms.shutdown();
+        throw e;
+      }
+      threadLocalMS.set(ms);
+      ms = threadLocalMS.get();
+      LOG.info("Created RawStore: " + ms + " from thread id: " + Thread.currentThread().getId());
+    }
+    return ms;
+  }
+
+  @Override
+  public TxnStore getTxnHandler() {
+    return getMsThreadTxnHandler(conf);
+  }
+
+  public static TxnStore getMsThreadTxnHandler(Configuration conf) {
+    TxnStore txn = threadLocalTxn.get();
+    if (txn == null) {
+      txn = TxnUtils.getTxnStore(conf);
+      threadLocalTxn.set(txn);
+    }
+    return txn;
+  }
+
+  static RawStore newRawStoreForConf(Configuration conf) throws MetaException {
+    Configuration newConf = new Configuration(conf);
+    String rawStoreClassName = MetastoreConf.getVar(newConf, MetastoreConf.ConfVars.RAW_STORE_IMPL);
+    LOG.info(addPrefix("Opening raw store with implementation class:" + rawStoreClassName));
+    return RawStoreProxy.getProxy(newConf, conf, rawStoreClassName, threadLocalId.get());
+  }
+
+  @VisibleForTesting
+  public static void createDefaultCatalog(RawStore ms, Warehouse wh) throws MetaException,
+      InvalidOperationException {
+    try {
+      Catalog defaultCat = ms.getCatalog(DEFAULT_CATALOG_NAME);
+      // Null check because in some test cases we get a null from ms.getCatalog.
+      if (defaultCat != null && defaultCat.getLocationUri().equals("TBD")) {
+        // One-time update issue: when the new 'hive' catalog is created during an upgrade,
+        // the script does not know the location of the warehouse, so we need to update it here.
+        LOG.info("Setting location of default catalog, as it hasn't been done after upgrade");
+        defaultCat.setLocationUri(wh.getWhRoot().toString());
+        ms.alterCatalog(defaultCat.getName(), defaultCat);
+      }
+
+    } catch (NoSuchObjectException e) {
+      Catalog cat = new Catalog(DEFAULT_CATALOG_NAME, wh.getWhRoot().toString());
+      long time = System.currentTimeMillis() / 1000;
+      cat.setCreateTime((int) time);
+      cat.setDescription(Warehouse.DEFAULT_CATALOG_COMMENT);
+      ms.createCatalog(cat);
+    }
+  }
+
+  private void createDefaultDB_core(RawStore ms) throws MetaException, InvalidObjectException {
+    try {
+      ms.getDatabase(DEFAULT_CATALOG_NAME, DEFAULT_DATABASE_NAME);
+    } catch (NoSuchObjectException e) {
+      LOG.info("Started creating a default database with name: " + DEFAULT_DATABASE_NAME);
+      Database db = new Database(DEFAULT_DATABASE_NAME, DEFAULT_DATABASE_COMMENT,
+          wh.getDefaultDatabasePath(DEFAULT_DATABASE_NAME, true).toString(), null);
+      db.setOwnerName(HiveMetaStore.PUBLIC);
+      db.setOwnerType(PrincipalType.ROLE);
+      db.setCatalogName(DEFAULT_CATALOG_NAME);
+      long time = System.currentTimeMillis() / 1000;
+      db.setCreateTime((int) time);
+      ms.createDatabase(db);
+      LOG.info("Successfully created a default database with name: " + DEFAULT_DATABASE_NAME);
+    }
+  }
+
+  /**
+   * Create the default database if it doesn't exist.
+   * <p>
+   * There is potential contention when HiveServer2 (using an embedded metastore) and the
+   * Metastore Server concurrently invoke createDefaultDB. If the first attempt fails with a
+   * JDOException, the call is retried once; if it fails again, the failure is simply logged
+   * as a warning, which means the other invocation succeeded.
+   *
+   * @throws MetaException
+   */
+  private void createDefaultDB() throws MetaException {
+    try {
+      RawStore ms = getMS();
+      createDefaultCatalog(ms, wh);
+      createDefaultDB_core(ms);
+    } catch (JDOException e) {
+      LOG.warn("Retrying creating default database after error: " + e.getMessage(), e);
+      try {
+        RawStore ms = getMS();
+        createDefaultCatalog(ms, wh);
+        createDefaultDB_core(ms);
+      } catch (InvalidObjectException | InvalidOperationException e1) {
+        throw new MetaException(e1.getMessage());
+      }
+    } catch (InvalidObjectException | InvalidOperationException e) {
+      throw new MetaException(e.getMessage());
+    }
+  }
+
+  /**
+   * Create the default roles if they don't exist.
+   * <p>
+   * There is potential contention when HiveServer2 (using an embedded metastore) and the
+   * Metastore Server concurrently invoke createDefaultRoles. If the first attempt fails with a
+   * JDOException, the call is retried once; if it fails again, the failure is simply logged
+   * as a warning, which means the other invocation succeeded.
+   *
+   * @throws MetaException
+   */
+  private void createDefaultRoles() throws MetaException {
+    try {
+      createDefaultRoles_core();
+    } catch (JDOException e) {
+      LOG.warn("Retrying creating default roles after error: " + e.getMessage(), e);
+      createDefaultRoles_core();
+    }
+  }
+
+  private void createDefaultRoles_core() throws MetaException {
+
+    RawStore ms = getMS();
+    try {
+      ms.addRole(HiveMetaStore.ADMIN, HiveMetaStore.ADMIN);

Review comment:
       This constant is only used by HMSHandler, so it should be moved there. Please also check HiveMetaStore for other constants that only HMSHandler uses, and migrate all of them.
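
       A minimal sketch of that migration (assuming ADMIN and PUBLIC are plain String constants; the exact declarations in HiveMetaStore may differ):

           // In HMSHandler.java, moved over from HiveMetaStore:
           public static final String ADMIN = "admin";
           public static final String PUBLIC = "public";

           // Call sites inside HMSHandler then drop the HiveMetaStore qualifier:
           ms.addRole(ADMIN, ADMIN);
           db.setOwnerName(PUBLIC);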





[GitHub] [hive] dataproc-metastore commented on a change in pull request #1777: HIVE-24470 - Separate HiveMetastore Thrift and Driver logic

Posted by GitBox <gi...@apache.org>.
dataproc-metastore commented on a change in pull request #1777:
URL: https://github.com/apache/hive/pull/1777#discussion_r546277938



##########
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HMSHandler.java
##########
@@ -0,0 +1,10137 @@
+package org.apache.hadoop.hive.metastore;
+
+public class HMSHandler extends FacebookBase implements IHMSHandler {
+  public static final Logger LOG = HiveMetaStore.LOG;

Review comment:
       HMSHandler should use its own logger instead of HiveMetaStore's.
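
       A minimal sketch of the suggested change (standard SLF4J pattern; the Logger and LoggerFactory imports are already in this file):

           // Instead of borrowing HiveMetaStore's logger:
           //   public static final Logger LOG = HiveMetaStore.LOG;
           // declare one scoped to this class:
           public static final Logger LOG = LoggerFactory.getLogger(HMSHandler.class);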





[GitHub] [hive] Noremac201 closed pull request #1777: HIVE-24470 - Separate HiveMetastore Thrift and Driver logic

Posted by GitBox <gi...@apache.org>.
Noremac201 closed pull request #1777:
URL: https://github.com/apache/hive/pull/1777


   

