Posted to gitbox@hive.apache.org by GitBox <gi...@apache.org> on 2020/12/08 23:23:53 UTC

[GitHub] [hive] miklosgergely opened a new pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

miklosgergely opened a new pull request #1756:
URL: https://github.com/apache/hive/pull/1756


   ### What changes were proposed in this pull request?
   Move the code used only by the SHOW commands next to the classes that process those commands.
   
   ### Why are the changes needed?
   Move the code from org.apache.hadoop.hive.ql.metadata.formatting to the SHOW command related directories, splitting it up by the specific commands, or moving it into utility classes where it is used by multiple commands.
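   
   As a rough sketch of the intended layout (illustrative only: the base class and text subclass names below match the diffs attached to the review comments, while the create() helper is an assumed convenience rather than the PR's actual API), each command gets its own small formatter hierarchy:
   
   ```java
   // Sketch of the per-command formatter split described above. The class names
   // follow the review diffs below; create() is assumed, for illustration only.
   package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
   
   import java.io.DataOutputStream;
   import java.util.List;
   
   import org.apache.hadoop.hive.conf.HiveConf;
   import org.apache.hadoop.hive.ql.metadata.Hive;
   import org.apache.hadoop.hive.ql.metadata.HiveException;
   import org.apache.hadoop.hive.ql.metadata.Partition;
   import org.apache.hadoop.hive.ql.metadata.Table;
   
   public abstract class ShowTableStatusFormatter {
     // Single entry point for SHOW TABLE STATUS; each output format (text,
     // JSON, ...) implements it in its own subclass next to the command.
     public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf,
         List<Table> tables, Partition partition) throws HiveException;
   
     // Assumed convenience factory; the text variant is the one shown in the
     // review diffs below.
     public static ShowTableStatusFormatter create() {
       return new TextShowTableStatusFormatter();
     }
   }
   ```
   
   This way each command's formatting logic lives in one place instead of in the monolithic MetaDataFormatter classes, so a new output format only touches the affected command's own package.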
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   All the unit tests and q tests still pass.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org
For additional commands, e-mail: gitbox-help@hive.apache.org


[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550804246



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/TextShowTableStatusFormatter.java
##########
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results to text format.
+ */
+public class TextShowTableStatusFormatter extends ShowTableStatusFormatter {
+  @Override
+  public void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition partition)
+      throws HiveException {
+    try {
+      for (Table table : tables) {
+        writeBasicInfo(out, table);
+        writeStorageInfo(out, partition, table);
+        writeColumnsInfo(out, table);
+        writeFileSystemInfo(out, db, conf, partition, table);
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void writeBasicInfo(DataOutputStream out, Table table) throws IOException, UnsupportedEncodingException {
+    out.write(("tableName:" + table.getTableName()).getBytes("UTF-8"));
+    out.write(Utilities.newLineCode);
+    out.write(("owner:" + table.getOwner()).getBytes("UTF-8"));
+    out.write(Utilities.newLineCode);

Review comment:
       Fixed.





[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924057



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      Map<String, String> parameters = table.getParameters();
+      String statsState = (parameters == null) ? null : parameters.get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // Walk through the existing map and truncate each path, so that tests
+        // do not mask it and we can still verify that the location is right.
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
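+  // Lexicographic list comparator: compares shared positions element by element
+  // (nulls sort first); if those are all equal, the shorter list sorts first.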
+  private class VectorComparator<T extends Comparable<T>> implements Comparator<List<T>> {
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          if (valB == null) {
+            return 1; // nulls sort first, so a non-null element is greater
+          }
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else if (valB != null) {
+          return -1;
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  private <T> List<T> sortList(List<T> list, Comparator<T> comparator) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret, comparator);
+    return ret;
+  }
+
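+  // Converts a seconds-since-epoch timestamp to a readable date; 0 means unknown.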
+  private String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo) {
+    displayAllParameters(params, tableInfo, true, false);
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo, boolean escapeUnicode,
+      boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
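+      // Hide the erasure-coded file counter when the table has no such files.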
+      if (key.equals(StatsSetupConst.NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value) : HiveStringUtils.escapeJava(value),
+          tableInfo, isOutputPadded);
+    }
+  }
+
+  private String getConstraintsInformation(Table table) {
+    StringBuilder constraintsInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    constraintsInfo.append(LINE_DELIM + "# Constraints" + LINE_DELIM);
+    if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Primary Key" + LINE_DELIM);
+      getPrimaryKeyInformation(constraintsInfo, table.getPrimaryKeyInfo());
+    }
+    if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Foreign Keys" + LINE_DELIM);
+      getForeignKeysInformation(constraintsInfo, table.getForeignKeyInfo());
+    }
+    if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Unique Constraints" + LINE_DELIM);
+      getUniqueConstraintsInformation(constraintsInfo, table.getUniqueKeyInfo());
+    }
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Not Null Constraints" + LINE_DELIM);
+      getNotNullConstraintsInformation(constraintsInfo, table.getNotNullConstraint());
+    }
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Default Constraints" + LINE_DELIM);
+      getDefaultConstraintsInformation(constraintsInfo, table.getDefaultConstraint());
+    }
+    if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Check Constraints" + LINE_DELIM);
+      getCheckConstraintsInformation(constraintsInfo, table.getCheckConstraint());
+    }
+    return constraintsInfo.toString();
+  }
+
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();
+    for (String columnName : columnNames.values()) {
+      constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title) + FIELD_DELIM);
+      formatOutput(new String[] {columnName}, constraintsInfo);
+    }
+  }
+
+  private void getForeignKeysInformation(StringBuilder constraintsInfo, ForeignKeyInfo constraint) {
+    formatOutput("Table:", constraint.getChildDatabaseName() + "." + constraint.getChildTableName(), constraintsInfo);
+    Map<String, List<ForeignKeyCol>> foreignKeys = constraint.getForeignKeys();
+    if (MapUtils.isNotEmpty(foreignKeys)) {
+      for (Map.Entry<String, List<ForeignKeyCol>> entry : foreignKeys.entrySet()) {
+        getForeignKeyRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getForeignKeyRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<ForeignKeyCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (ForeignKeyCol column : columns) {
+        String[] fields = new String[3];
+        fields[0] = "Parent Column Name:" +
+            column.parentDatabaseName + "."+ column.parentTableName + "." + column.parentColName;
+        fields[1] = "Column Name:" + column.childColName;
+        fields[2] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getUniqueConstraintsInformation(StringBuilder constraintsInfo, UniqueConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<UniqueConstraintCol>> uniqueConstraints = constraint.getUniqueConstraints();
+    if (MapUtils.isNotEmpty(uniqueConstraints)) {
+      for (Map.Entry<String, List<UniqueConstraintCol>> entry : uniqueConstraints.entrySet()) {
+        getUniqueConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getUniqueConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<UniqueConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (UniqueConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getNotNullConstraintsInformation(StringBuilder constraintsInfo, NotNullConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, String> notNullConstraints = constraint.getNotNullConstraints();
+    if (MapUtils.isNotEmpty(notNullConstraints)) {
+      for (Map.Entry<String, String> entry : notNullConstraints.entrySet()) {
+        formatOutput("Constraint Name:", entry.getKey(), constraintsInfo);
+        formatOutput("Column Name:", entry.getValue(), constraintsInfo);
+        constraintsInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  private void getDefaultConstraintsInformation(StringBuilder constraintsInfo, DefaultConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<DefaultConstraintCol>> defaultConstraints = constraint.getDefaultConstraints();
+    if (MapUtils.isNotEmpty(defaultConstraints)) {
+      for (Map.Entry<String, List<DefaultConstraintCol>> entry : defaultConstraints.entrySet()) {
+        getDefaultConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getDefaultConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<DefaultConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (DefaultConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Default Value:" + column.defaultVal;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getCheckConstraintsInformation(StringBuilder constraintsInfo, CheckConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<CheckConstraintCol>> checkConstraints = constraint.getCheckConstraints();
+    if (MapUtils.isNotEmpty(checkConstraints)) {
+      for (Map.Entry<String, List<CheckConstraintCol>> entry : checkConstraints.entrySet()) {
+        getCheckConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getCheckConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<CheckConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (CheckConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Check Value:" + column.checkExpression;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void addExtendedTableData(DataOutputStream out, Table table, Partition partition) throws IOException {
+    if (partition != null) {
+      out.write(("Detailed Partition Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(partition.getTPartition().toString().getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    } else {
+      out.write(("Detailed Table Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      String tableDesc = HiveStringUtils.escapeJava(table.getTTable().toString());
+      out.write(tableDesc.getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty

Review comment:
       Fixed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      Map<String, String> parameters = table.getParameters();
+      String statsState = (parameters == null) ? null : parameters.get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // Walk through the existing map and truncate each path, so that tests
+        // do not mask it and we can still verify that the location is right.
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
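+  // Lexicographic list comparator: compares shared positions element by element
+  // (nulls sort first); if those are all equal, the shorter list sorts first.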
+  private class VectorComparator<T extends Comparable<T>> implements Comparator<List<T>> {
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          if (valB == null) {
+            return 1; // nulls sort first, so a non-null element is greater
+          }
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else if (valB != null) {
+          return -1;
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  private <T> List<T> sortList(List<T> list, Comparator<T> comparator) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret, comparator);
+    return ret;
+  }
+
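+  // Converts a seconds-since-epoch timestamp to a readable date; 0 means unknown.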
+  private String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo) {
+    displayAllParameters(params, tableInfo, true, false);
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo, boolean escapeUnicode,
+      boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
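+      // Hide the erasure-coded file counter when the table has no such files.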
+      if (key.equals(StatsSetupConst.NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value) : HiveStringUtils.escapeJava(value),
+          tableInfo, isOutputPadded);
+    }
+  }
+
+  private String getConstraintsInformation(Table table) {
+    StringBuilder constraintsInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    constraintsInfo.append(LINE_DELIM + "# Constraints" + LINE_DELIM);
+    if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Primary Key" + LINE_DELIM);
+      getPrimaryKeyInformation(constraintsInfo, table.getPrimaryKeyInfo());
+    }
+    if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Foreign Keys" + LINE_DELIM);
+      getForeignKeysInformation(constraintsInfo, table.getForeignKeyInfo());
+    }
+    if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Unique Constraints" + LINE_DELIM);
+      getUniqueConstraintsInformation(constraintsInfo, table.getUniqueKeyInfo());
+    }
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Not Null Constraints" + LINE_DELIM);
+      getNotNullConstraintsInformation(constraintsInfo, table.getNotNullConstraint());
+    }
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Default Constraints" + LINE_DELIM);
+      getDefaultConstraintsInformation(constraintsInfo, table.getDefaultConstraint());
+    }
+    if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Check Constraints" + LINE_DELIM);
+      getCheckConstraintsInformation(constraintsInfo, table.getCheckConstraint());
+    }
+    return constraintsInfo.toString();
+  }
+
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();
+    for (String columnName : columnNames.values()) {
+      constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title) + FIELD_DELIM);
+      formatOutput(new String[] {columnName}, constraintsInfo);
+    }
+  }
+
+  private void getForeignKeysInformation(StringBuilder constraintsInfo, ForeignKeyInfo constraint) {
+    formatOutput("Table:", constraint.getChildDatabaseName() + "." + constraint.getChildTableName(), constraintsInfo);
+    Map<String, List<ForeignKeyCol>> foreignKeys = constraint.getForeignKeys();
+    if (MapUtils.isNotEmpty(foreignKeys)) {
+      for (Map.Entry<String, List<ForeignKeyCol>> entry : foreignKeys.entrySet()) {
+        getForeignKeyRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getForeignKeyRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<ForeignKeyCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (ForeignKeyCol column : columns) {
+        String[] fields = new String[3];
+        fields[0] = "Parent Column Name:" +
+            column.parentDatabaseName + "."+ column.parentTableName + "." + column.parentColName;
+        fields[1] = "Column Name:" + column.childColName;
+        fields[2] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getUniqueConstraintsInformation(StringBuilder constraintsInfo, UniqueConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<UniqueConstraintCol>> uniqueConstraints = constraint.getUniqueConstraints();
+    if (MapUtils.isNotEmpty(uniqueConstraints)) {
+      for (Map.Entry<String, List<UniqueConstraintCol>> entry : uniqueConstraints.entrySet()) {
+        getUniqueConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getUniqueConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<UniqueConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (UniqueConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getNotNullConstraintsInformation(StringBuilder constraintsInfo, NotNullConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, String> notNullConstraints = constraint.getNotNullConstraints();
+    if (MapUtils.isNotEmpty(notNullConstraints)) {
+      for (Map.Entry<String, String> entry : notNullConstraints.entrySet()) {
+        formatOutput("Constraint Name:", entry.getKey(), constraintsInfo);
+        formatOutput("Column Name:", entry.getValue(), constraintsInfo);
+        constraintsInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  private void getDefaultConstraintsInformation(StringBuilder constraintsInfo, DefaultConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<DefaultConstraintCol>> defaultConstraints = constraint.getDefaultConstraints();
+    if (MapUtils.isNotEmpty(defaultConstraints)) {
+      for (Map.Entry<String, List<DefaultConstraintCol>> entry : defaultConstraints.entrySet()) {
+        getDefaultConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getDefaultConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<DefaultConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (DefaultConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Default Value:" + column.defaultVal;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getCheckConstraintsInformation(StringBuilder constraintsInfo, CheckConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<CheckConstraintCol>> checkConstraints = constraint.getCheckConstraints();
+    if (MapUtils.isNotEmpty(checkConstraints)) {
+      for (Map.Entry<String, List<CheckConstraintCol>> entry : checkConstraints.entrySet()) {
+        getCheckConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getCheckConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<CheckConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (CheckConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Check Value:" + column.checkExpression;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void addExtendedTableData(DataOutputStream out, Table table, Partition partition) throws IOException {
+    if (partition != null) {
+      out.write(("Detailed Partition Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(partition.getTPartition().toString().getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    } else {
+      out.write(("Detailed Table Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      String tableDesc = HiveStringUtils.escapeJava(table.getTTable().toString());
+      out.write(tableDesc.getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    }
+  }
+
+  private void addExtendedConstraintData(DataOutputStream out, Table table)
+      throws IOException, UnsupportedEncodingException {
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      out.write(("Constraints").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+        out.write(table.getPrimaryKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+        out.write(table.getForeignKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+        out.write(table.getUniqueKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+        out.write(table.getNotNullConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+        out.write(table.getDefaultConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+        out.write(table.getCheckConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+    }

Review comment:
       Fixed.
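
For orientation, here is a minimal, hypothetical sketch of the row layout the quoted addExtendedTableData emits (label, serialized details, empty comment column); the thrift-style string is a made-up placeholder, not real metastore output:

    import java.io.DataOutputStream;
    import java.io.IOException;

    public class ExtendedRowSketch {
      public static void main(String[] args) throws IOException {
        DataOutputStream out = new DataOutputStream(System.out);
        out.write("Detailed Table Information".getBytes("UTF-8"));
        out.write('\t');                                                        // Utilities.tabCode equivalent
        out.write("Table(tableName:sample, dbName:default)".getBytes("UTF-8")); // placeholder details
        out.write('\t');
        out.write('\n');                                                        // comment column is left empty
        out.flush();
      }
    }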

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      Map<String, String> tableParameters = table.getParameters();
+      String statsState = (tableParameters == null) ? null : tableParameters.get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through the existing map and truncate each path, so that tests won't mask it and we can verify the location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
+  private class VectorComparator<T extends Comparable<T>> implements Comparator<List<T>> {
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else {
+          if (valB != null) {
+            return -1;
+          }
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  private <T> List<T> sortList(List<T> list, Comparator<T> comparator) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret, comparator);
+    return ret;
+  }
+
+  private String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo) {
+    displayAllParameters(params, tableInfo, true, false);
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo, boolean escapeUnicode,
+      boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
+      if (key.equals(StatsSetupConst.NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value) : HiveStringUtils.escapeJava(value),
+          tableInfo, isOutputPadded);
+    }
+  }
+
+  private String getConstraintsInformation(Table table) {
+    StringBuilder constraintsInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    constraintsInfo.append(LINE_DELIM + "# Constraints" + LINE_DELIM);
+    if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Primary Key" + LINE_DELIM);
+      getPrimaryKeyInformation(constraintsInfo, table.getPrimaryKeyInfo());
+    }
+    if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Foreign Keys" + LINE_DELIM);
+      getForeignKeysInformation(constraintsInfo, table.getForeignKeyInfo());
+    }
+    if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Unique Constraints" + LINE_DELIM);
+      getUniqueConstraintsInformation(constraintsInfo, table.getUniqueKeyInfo());
+    }
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Not Null Constraints" + LINE_DELIM);
+      getNotNullConstraintsInformation(constraintsInfo, table.getNotNullConstraint());
+    }
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Default Constraints" + LINE_DELIM);
+      getDefaultConstraintsInformation(constraintsInfo, table.getDefaultConstraint());
+    }
+    if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Check Constraints" + LINE_DELIM);
+      getCheckConstraintsInformation(constraintsInfo, table.getCheckConstraint());
+    }
+    return constraintsInfo.toString();
+  }
+
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();
+    for (String columnName : columnNames.values()) {
+      constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title) + FIELD_DELIM);
+      formatOutput(new String[] {columnName}, constraintsInfo);
+    }
+  }
+
+  private void getForeignKeysInformation(StringBuilder constraintsInfo, ForeignKeyInfo constraint) {
+    formatOutput("Table:", constraint.getChildDatabaseName() + "." + constraint.getChildTableName(), constraintsInfo);
+    Map<String, List<ForeignKeyCol>> foreignKeys = constraint.getForeignKeys();
+    if (MapUtils.isNotEmpty(foreignKeys)) {
+      for (Map.Entry<String, List<ForeignKeyCol>> entry : foreignKeys.entrySet()) {
+        getForeignKeyRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getForeignKeyRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<ForeignKeyCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (ForeignKeyCol column : columns) {
+        String[] fields = new String[3];
+        fields[0] = "Parent Column Name:" +
+            column.parentDatabaseName + "."+ column.parentTableName + "." + column.parentColName;
+        fields[1] = "Column Name:" + column.childColName;
+        fields[2] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getUniqueConstraintsInformation(StringBuilder constraintsInfo, UniqueConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<UniqueConstraintCol>> uniqueConstraints = constraint.getUniqueConstraints();
+    if (MapUtils.isNotEmpty(uniqueConstraints)) {
+      for (Map.Entry<String, List<UniqueConstraintCol>> entry : uniqueConstraints.entrySet()) {
+        getUniqueConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getUniqueConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<UniqueConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (UniqueConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getNotNullConstraintsInformation(StringBuilder constraintsInfo, NotNullConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, String> notNullConstraints = constraint.getNotNullConstraints();
+    if (MapUtils.isNotEmpty(notNullConstraints)) {
+      for (Map.Entry<String, String> entry : notNullConstraints.entrySet()) {
+        formatOutput("Constraint Name:", entry.getKey(), constraintsInfo);
+        formatOutput("Column Name:", entry.getValue(), constraintsInfo);
+        constraintsInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  private void getDefaultConstraintsInformation(StringBuilder constraintsInfo, DefaultConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<DefaultConstraintCol>> defaultConstraints = constraint.getDefaultConstraints();
+    if (MapUtils.isNotEmpty(defaultConstraints)) {
+      for (Map.Entry<String, List<DefaultConstraintCol>> entry : defaultConstraints.entrySet()) {
+        getDefaultConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getDefaultConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<DefaultConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (DefaultConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Default Value:" + column.defaultVal;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getCheckConstraintsInformation(StringBuilder constraintsInfo, CheckConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<CheckConstraintCol>> checkConstraints = constraint.getCheckConstraints();
+    if (MapUtils.isNotEmpty(checkConstraints)) {
+      for (Map.Entry<String, List<CheckConstraintCol>> entry : checkConstraints.entrySet()) {
+        getCheckConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getCheckConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<CheckConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (CheckConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Check Value:" + column.checkExpression;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void addExtendedTableData(DataOutputStream out, Table table, Partition partition) throws IOException {
+    if (partition != null) {
+      out.write(("Detailed Partition Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(partition.getTPartition().toString().getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    } else {
+      out.write(("Detailed Table Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      String tableDesc = HiveStringUtils.escapeJava(table.getTTable().toString());
+      out.write(tableDesc.getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    }
+  }
+
+  private void addExtendedConstraintData(DataOutputStream out, Table table)
+      throws IOException, UnsupportedEncodingException {
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      out.write(("Constraints").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+        out.write(table.getPrimaryKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+        out.write(table.getForeignKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+        out.write(table.getUniqueKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+        out.write(table.getNotNullConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+        out.write(table.getDefaultConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+        out.write(table.getCheckConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+    }
+  }
+
+  private void addExtendedStorageData(DataOutputStream out, Table table)
+      throws IOException, UnsupportedEncodingException {
+    if (table.getStorageHandlerInfo() != null) {
+      out.write(("StorageHandlerInfo").getBytes("UTF-8"));
+      out.write(Utilities.newLineCode);
+      out.write(table.getStorageHandlerInfo().formatAsText().getBytes("UTF-8"));
+      out.write(Utilities.newLineCode);

Review comment:
       Fixed.
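
As a quick aside, a minimal sketch of how the two-argument formatOutput helper used throughout this formatter (defined in ShowUtils, quoted further down in this thread) pads one detail row; the label/value pairs here are placeholders:

    import org.apache.hadoop.hive.ql.ddl.ShowUtils;

    public class FormatOutputSketch {
      public static void main(String[] args) {
        StringBuilder tableInfo = new StringBuilder();
        // Each label is padded on the right to ALIGNMENT (20) characters, followed by a tab and the padded value.
        ShowUtils.formatOutput("Database:", "default", tableInfo);  // placeholder values
        ShowUtils.formatOutput("Owner:", "hive", tableInfo);
        System.out.print(tableInfo);  // e.g. "Database:           \tdefault             \n..."
      }
    }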





[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924330



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/JsonShowTableStatusFormatter.java
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter.JsonDescTableFormatter;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats SHOW TABLE STATUS commands to json format.
+ */
+public class JsonShowTableStatusFormatter extends ShowTableStatusFormatter {
+  @Override
+  public void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition partition)
+      throws HiveException {
+    List<Map<String, Object>> tableData = new ArrayList<>();
+    try {
+      for (Table table : tables) {
+        tableData.add(makeOneTableStatus(table, db, conf, partition));
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+    ShowUtils.asJson(out, MapBuilder.create().put("tables", tableData).build());
+  }
+
+  private Map<String, Object> makeOneTableStatus(Table table, Hive db, HiveConf conf, Partition partition)
+      throws HiveException, IOException {
+    StorageInfo storageInfo = getStorageInfo(table, partition);
+
+    MapBuilder builder = MapBuilder.create();
+    builder.put("tableName", table.getTableName());
+    builder.put("ownerType", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null");
+    builder.put("owner", table.getOwner());
+    builder.put("location", storageInfo.location);
+    builder.put("inputFormat", storageInfo.inputFormatClass);
+    builder.put("outputFormat", storageInfo.outputFormatClass);
+    builder.put("columns", JsonDescTableFormatter.createColumnsInfo(table.getCols(), new ArrayList<>()));
+
+    builder.put("partitioned", table.isPartitioned());
+    if (table.isPartitioned()) {
+      builder.put("partitionColumns", JsonDescTableFormatter.createColumnsInfo(table.getPartCols(), new ArrayList<>()));

Review comment:
       Fixed.
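
For illustration, a rough, non-authoritative sketch of how this formatter assembles and serializes one status entry with MapBuilder and ShowUtils.asJson (both quoted in this thread); the table values are invented placeholders:

    import java.util.Collections;
    import java.util.Map;
    import org.apache.hadoop.hive.ql.ddl.ShowUtils;
    import org.apache.hadoop.hive.ql.metadata.HiveException;
    import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;

    public class ShowTableStatusJsonSketch {
      public static void main(String[] args) throws HiveException {
        // A real run would fill these from Table and StorageInfo, as in the diff above.
        Map<String, Object> status = MapBuilder.create()
            .put("tableName", "sample_table")
            .put("owner", "hive")
            .put("partitioned", false)
            .build();
        ShowUtils.asJson(System.out, MapBuilder.create().put("tables", Collections.singletonList(status)).build());
        // Illustrative output: {"tables":[{"tableName":"sample_table","owner":"hive","partitioned":false}]}
      }
    }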





[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924846



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});
+  }
+
+  public static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result = HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    return (result != null) ? result.toString() : "";
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritableV2 writableValue = new DateWritableV2((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static String convertToString(byte[] buffer) {
+    if (buffer == null || buffer.length == 0) {
+      return "";
+    }
+    return new String(Arrays.copyOfRange(buffer, 0, 2));
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Timestamp val) {
+    if (val == null) {
+      return "";
+    }
+
+    TimestampWritableV2 writableValue = new TimestampWritableV2(Timestamp.ofEpochSecond(val.getSecondsSinceEpoch()));
+    return writableValue.toString();
+  }
+
+  /**
+   * Convert the map to a JSON string.
+   */
+  public static void asJson(OutputStream out, Map<String, Object> data) throws HiveException {
+    try {
+      new ObjectMapper().writeValue(out, data);
+    } catch (IOException e) {
+      throw new HiveException("Unable to convert to json", e);
+    }
+  }
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  public static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  public static final int ALIGNMENT = 20;
+
+  /**
+   * Prints a row with the given fields into the builder.
+   * The last field may be a multiline field, in which case the extra lines should be padded.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Whether the last field may span multiple lines, if it contains newlines
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo, boolean isLastLinePadded,
+      boolean isFormatted) {
+    if (!isFormatted) {
+      for (int i = 0; i < fields.length; i++) {
+        Object value = HiveStringUtils.escapeJava(fields[i]);
+        if (value != null) {
+          tableInfo.append(value);
+        }
+        tableInfo.append((i == fields.length - 1) ? LINE_DELIM : FIELD_DELIM);
+      }
+    } else {
+      int[] paddings = new int[fields.length - 1];
+      if (fields.length > 1) {
+        for (int i = 0; i < fields.length - 1; i++) {
+          if (fields[i] == null) {
+            tableInfo.append(FIELD_DELIM);
+            continue;
+          }
+          tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i])).append(FIELD_DELIM);
+          paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+        }
+      }
+      if (fields.length > 0) {
+        String value = fields[fields.length - 1];
+        String unescapedValue = (isLastLinePadded && value != null) ?
+            value.replaceAll("\\\\r\\\\n|\\\\n|\\\\r", "\n") : value;
+        indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+      } else {
+        tableInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints a row of the given fields as a formatted line.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo) {
+    formatOutput(fields, tableInfo, false, true);
+  }
+
+  /**
+   * Prints the name-value pair, and if the value contains newlines, adds one more empty field
+   * before the two values (assumes the name-value pair is already indented accordingly).
+   * 
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();
+    indentMultilineValue(value, tableInfo, new int[] {0, colNameLength}, true);
+  }
+
+  /**
+   * Prints the name-value pair.
+   * If the output is padded, the value is unescaped so that it can be printed in multiple lines.
+   * In this case it assumes the pair is already indented with a field delimiter.
+   * 
+   * @param name The field name to print
+   * @param value The value to print
+   * @param tableInfo The target builder
+   * @param isOutputPadded Should the value be printed as a padded string?
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo, boolean isOutputPadded) {
+    String unescapedValue = (isOutputPadded && value != null) ?
+        value.replaceAll("\\\\r\\\\n|\\\\n|\\\\r", "\n") : value;
+    formatOutput(name, unescapedValue, tableInfo);
+  }
+
+  /**
+   * Indent processing for multi-line values.
+   * Values should be indented the same amount on each line.
+   * If the first line comment starts indented by k, the following line comments should also be indented by k.
+   * 
+   * @param value the value to write
+   * @param tableInfo the buffer to write to
+   * @param columnWidths the widths of the previous columns
+   * @param printNull print null as a string, or do not print anything
+   */
+  private static void indentMultilineValue(String value, StringBuilder tableInfo, int[] columnWidths,
+      boolean printNull) {
+    if (value == null) {
+      if (printNull) {
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", value));
+      }
+      tableInfo.append(LINE_DELIM);
+    } else {
+      String[] valueSegments = value.split("\r\n|\n|\r");
+      tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[0])).append(LINE_DELIM);
+      for (int i = 1; i < valueSegments.length; i++) {
+        printPadding(tableInfo, columnWidths);
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[i])).append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints the right padding, with the given column widths.
+   * 
+   * @param tableInfo The buffer to write to
+   * @param columnWidths The column widths
+   */
+  private static void printPadding(StringBuilder tableInfo, int[] columnWidths) {
+    for (int columnWidth : columnWidths) {
+      if (columnWidth == 0) {
+        tableInfo.append(FIELD_DELIM);
+      } else {
+        tableInfo.append(String.format("%" + columnWidth + "s" + FIELD_DELIM, ""));
+      }
+    }
+  }
+
+  /**
+   * Helps to format tables in SHOW ... command outputs.
+   */
+  public static class TextMetaDataTable {
+    private List<List<String>> table = new ArrayList<>();
+
+    public void addRow(String... values) {
+      table.add(Lists.<String> newArrayList(values));

Review comment:
       Fixed.
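
As a usage note, a small hypothetical sketch of the TextMetaDataTable helper quoted above, mirroring how addStatsData builds the column listing; the rows are placeholder data:

    import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;

    public class MetaDataTableSketch {
      public static void main(String[] args) {
        TextMetaDataTable table = new TextMetaDataTable();
        table.addRow("col_name", "data_type", "comment");  // header row
        table.addRow("id", "int", "");                     // placeholder columns
        table.addRow("name", "string", "user name");
        // renderTable(true) pads the cells for aligned, human-readable output;
        // renderTable(false) emits plain tab-separated fields.
        System.out.print(table.renderTable(true));
      }
    }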





[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924029



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {

Review comment:
       Fixed.
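
       A minimal sketch of the likely change, assuming the comment targeted the null-plus-size check (the same file already calls CollectionUtils.isNotEmpty a few lines above, so the helper is in scope):

           List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
           // isNotEmpty folds the null check and the size() > 0 check into one call
           if (CollectionUtils.isNotEmpty(skewedColNames)) {
             // render the skewed column names
           }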






[GitHub] [hive] miklosgergely merged pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely merged pull request #1756:
URL: https://github.com/apache/hive/pull/1756


   




[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924436



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/JsonShowTableStatusFormatter.java
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter.JsonDescTableFormatter;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats SHOW TABLE STATUS commands to json format.
+ */
+public class JsonShowTableStatusFormatter extends ShowTableStatusFormatter {
+  @Override
+  public void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition partition)
+      throws HiveException {
+    List<Map<String, Object>> tableData = new ArrayList<>();
+    try {
+      for (Table table : tables) {
+        tableData.add(makeOneTableStatus(table, db, conf, partition));
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+    ShowUtils.asJson(out, MapBuilder.create().put("tables", tableData).build());
+  }
+
+  private Map<String, Object> makeOneTableStatus(Table table, Hive db, HiveConf conf, Partition partition)
+      throws HiveException, IOException {
+    StorageInfo storageInfo = getStorageInfo(table, partition);
+
+    MapBuilder builder = MapBuilder.create();
+    builder.put("tableName", table.getTableName());
+    builder.put("ownerType", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null");
+    builder.put("owner", table.getOwner());
+    builder.put("location", storageInfo.location);
+    builder.put("inputFormat", storageInfo.inputFormatClass);
+    builder.put("outputFormat", storageInfo.outputFormatClass);
+    builder.put("columns", JsonDescTableFormatter.createColumnsInfo(table.getCols(), new ArrayList<>()));

Review comment:
       Fixed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550923905



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});

Review comment:
       Fixed






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550923722



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesFormatter.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.database.show;
+
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Formats SHOW DATABASES results.
+ */
+abstract class ShowDatabasesFormatter {
+  static ShowDatabasesFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowDatabasesFormatter();
+    } else {
+      return new TextShowDatabasesFormatter();
+    }
+  }
+
+  abstract void showDatabases(DataOutputStream out, List<String> databases) throws HiveException;
+
+  // ------ Implementations ------
+
+  static class JsonShowDatabasesFormatter extends ShowDatabasesFormatter {
+    @Override
+    void showDatabases(DataOutputStream out, List<String> databases) throws HiveException {
+      ShowUtils.asJson(out, MapBuilder.create().put("databases", databases).build());
+    }
+  }
+
+  static class TextShowDatabasesFormatter extends ShowDatabasesFormatter {
+    @Override
+    void showDatabases(DataOutputStream out, List<String> databases) throws HiveException {
+      try {
+        for (String database : databases) {
+          out.write(database.getBytes("UTF-8"));

Review comment:
       Fixed.
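
       A minimal sketch of the probable fix, assuming the comment was about the "UTF-8" string literal; String.getBytes(Charset) avoids the checked UnsupportedEncodingException that the String-named overload declares:

           import java.io.DataOutputStream;
           import java.io.IOException;
           import java.nio.charset.StandardCharsets;

           final class Utf8Write {
             // getBytes(Charset) cannot throw UnsupportedEncodingException,
             // unlike the getBytes("UTF-8") overload it replaces
             static void writeDatabase(DataOutputStream out, String database) throws IOException {
               out.write(database.getBytes(StandardCharsets.UTF_8));
             }
           }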

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseFormatter.java
##########
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.database.desc;
+
+import org.apache.commons.collections.MapUtils;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.PrincipalType;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.Map;
+
+/**
+ * Formats DESC DATABASES results.
+ */
+abstract class DescDatabaseFormatter {
+  static DescDatabaseFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonDescDatabaseFormatter();
+    } else {
+      return new TextDescDatabaseFormatter();
+    }
+  }
+
+  abstract void showDatabaseDescription(DataOutputStream out, String database, String comment, String location,
+      String managedLocation, String ownerName, PrincipalType ownerType, Map<String, String> params)
+      throws HiveException;
+
+  // ------ Implementations ------
+
+  static class JsonDescDatabaseFormatter extends DescDatabaseFormatter {
+    @Override
+    void showDatabaseDescription(DataOutputStream out, String database, String comment, String location,
+        String managedLocation, String ownerName, PrincipalType ownerType, Map<String, String> params)
+        throws HiveException {
+      MapBuilder builder = MapBuilder.create()
+          .put("database", database)
+          .put("comment", comment)
+          .put("location", location);
+      if (managedLocation != null) {
+        builder.put("managedLocation", managedLocation);
+      }
+      if (ownerName != null) {
+        builder.put("owner", ownerName);
+      }
+      if (ownerType != null) {
+        builder.put("ownerType", ownerType.name());
+      }
+      if (MapUtils.isNotEmpty(params)) {
+        builder.put("params", params);
+      }
+      ShowUtils.asJson(out, builder.build());
+    }
+  }
+
+  static class TextDescDatabaseFormatter extends DescDatabaseFormatter {
+    @Override
+    void showDatabaseDescription(DataOutputStream out, String database, String comment, String location,
+        String managedLocation, String ownerName, PrincipalType ownerType, Map<String, String> params)
+        throws HiveException {
+      try {
+        out.write(database.getBytes("UTF-8"));
+        out.write(Utilities.tabCode);
+        if (comment != null) {
+          out.write(HiveStringUtils.escapeJava(comment).getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (location != null) {
+          out.write(location.getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (managedLocation != null) {
+          out.write(managedLocation.getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (ownerName != null) {
+          out.write(ownerName.getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (ownerType != null) {
+          out.write(ownerType.name().getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (MapUtils.isNotEmpty(params)) {
+          out.write(params.toString().getBytes("UTF-8"));
+        }

Review comment:
       Fixed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550804054



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");

Review comment:
       Fixed, didn't know about that.
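
       The "that" in question is presumably java.nio.charset.StandardCharsets, available since Java 7; a minimal sketch of the likely replacement:

           import java.nio.charset.Charset;
           import java.nio.charset.StandardCharsets;

           // StandardCharsets.UTF_8 is a shared JDK constant: no charset name lookup
           // at class-load time and no risk of a mistyped name throwing at runtime
           public static final Charset UTF_8 = StandardCharsets.UTF_8;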






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924576



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});
+  }
+
+  public static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result = HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    return (result != null) ? result.toString() : "";
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritableV2 writableValue = new DateWritableV2((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static String convertToString(byte[] buffer) {
+    if (buffer == null || buffer.length == 0) {
+      return "";
+    }
+    return new String(Arrays.copyOfRange(buffer, 0, 2));
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Timestamp val) {
+    if (val == null) {
+      return "";
+    }
+
+    TimestampWritableV2 writableValue = new TimestampWritableV2(Timestamp.ofEpochSecond(val.getSecondsSinceEpoch()));
+    return writableValue.toString();
+  }
+
+  /**
+   * Convert the map to a JSON string.
+   */
+  public static void asJson(OutputStream out, Map<String, Object> data) throws HiveException {
+    try {
+      new ObjectMapper().writeValue(out, data);
+    } catch (IOException e) {
+      throw new HiveException("Unable to convert to json", e);
+    }
+  }
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  public static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  public static final int ALIGNMENT = 20;
+
+  /**
+   * Prints a row with the given fields into the builder.
+   * The last field could be a multiline field, and the extra lines should be padded.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Whether the last field may be printed across multiple lines if it contains newlines
+   * @param isFormatted Whether to align the fields into padded columns
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo, boolean isLastLinePadded,
+      boolean isFormatted) {
+    if (!isFormatted) {
+      for (int i = 0; i < fields.length; i++) {
+        Object value = HiveStringUtils.escapeJava(fields[i]);
+        if (value != null) {
+          tableInfo.append(value);
+        }
+        tableInfo.append((i == fields.length - 1) ? LINE_DELIM : FIELD_DELIM);
+      }
+    } else {
+      int[] paddings = new int[fields.length - 1];
+      if (fields.length > 1) {
+        for (int i = 0; i < fields.length - 1; i++) {
+          if (fields[i] == null) {
+            tableInfo.append(FIELD_DELIM);
+            continue;
+          }
+          tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i])).append(FIELD_DELIM);
+          paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+        }
+      }
+      if (fields.length > 0) {
+        String value = fields[fields.length - 1];
+        String unescapedValue = (isLastLinePadded && value != null) ?
+            value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+        indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+      } else {
+        tableInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints a row with the given fields as a formatted line.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo) {
+    formatOutput(fields, tableInfo, false, true);
+  }
+
+  /**
+   * Prints the name-value pair; if the value contains newlines, it adds one more empty field
+   * before the two values (assumes the name-value pair is already indented accordingly).
+   * 
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();
+    indentMultilineValue(value, tableInfo, new int[] {0, colNameLength}, true);
+  }
+
+  /**
+   * Prints the name-value pair.
+   * If the output is padded, the value is unescaped so it can be printed across multiple lines.
+   * In this case it assumes the pair is already indented with a field delimiter.
+   * 
+   * @param name The field name to print
+   * @param value The value to print
+   * @param tableInfo The target builder
+   * @param isOutputPadded Should the value be printed as a padded string?
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo, boolean isOutputPadded) {
+    String unescapedValue = (isOutputPadded && value != null) ?
+        value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+    formatOutput(name, unescapedValue, tableInfo);
+  }
+
+  /**
+   * Indent processing for multi-line values.
+   * Values should be indented the same amount on each line.
+   * If the first line comment starts indented by k, the following line comments should also be indented by k.
+   * 
+   * @param value the value to write
+   * @param tableInfo the buffer to write to
+   * @param columnWidths the widths of the previous columns
+   * @param printNull print null as a string, or do not print anything
+   */
+  private static void indentMultilineValue(String value, StringBuilder tableInfo, int[] columnWidths,
+      boolean printNull) {
+    if (value == null) {
+      if (printNull) {
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", value));
+      }
+      tableInfo.append(LINE_DELIM);
+    } else {
+      String[] valueSegments = value.split("\n|\r|\r\n");
+      tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[0])).append(LINE_DELIM);
+      for (int i = 1; i < valueSegments.length; i++) {
+        printPadding(tableInfo, columnWidths);
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[i])).append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints the right padding with the given column widths.
+   * 
+   * @param tableInfo The buffer to write to
+   * @param columnWidths The column widths
+   */
+  private static void printPadding(StringBuilder tableInfo, int[] columnWidths) {
+    for (int columnWidth : columnWidths) {
+      if (columnWidth == 0) {
+        tableInfo.append(FIELD_DELIM);
+      } else {
+        tableInfo.append(String.format("%" + columnWidth + "s" + FIELD_DELIM, ""));
+      }
+    }
+  }
+
+  /**
+   * Helps to format tables in SHOW ... command outputs.
+   */
+  public static class TextMetaDataTable {
+    private List<List<String>> table = new ArrayList<>();
+
+    public void addRow(String... values) {
+      table.add(Lists.<String> newArrayList(values));
+    }
+
+    public String renderTable(boolean isOutputPadded) {
+      StringBuilder stringBuilder = new StringBuilder();
+      for (List<String> row : table) {
+        formatOutput(row.toArray(new String[] {}), stringBuilder, isOutputPadded, isOutputPadded);

Review comment:
       Fixed.
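
       Assuming the comment targeted the toArray argument, the usual replacement is the zero-length array idiom:

           // A zero-length array only tells toArray the component type; the JVM
           // allocates the correctly sized result, so presizing buys nothing here.
           String[] fields = row.toArray(new String[0]);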






[GitHub] [hive] miklosgergely commented on pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on pull request #1756:
URL: https://github.com/apache/hive/pull/1756#issuecomment-755524799


   @belugabehr I removed the unrelated changes from this patch.




[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550804520



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/workloadmanagement/resourceplan/show/formatter/TextShowResourcePlanFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.workloadmanagement.resourceplan.show.formatter;
+
+import org.apache.hadoop.hive.metastore.api.WMFullResourcePlan;
+import org.apache.hadoop.hive.metastore.api.WMResourcePlan;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Formats SHOW RESOURCE PLAN(S) results to text format.
+ */
+class TextShowResourcePlanFormatter extends ShowResourcePlanFormatter {
+  @Override
+  public void showResourcePlans(DataOutputStream out, List<WMResourcePlan> resourcePlans) throws HiveException {
+    try {
+      for (WMResourcePlan plan : resourcePlans) {
+        out.write(plan.getName().getBytes(ShowUtils.UTF_8));
+        out.write(Utilities.tabCode);
+        out.write(plan.getStatus().name().getBytes(ShowUtils.UTF_8));
+        out.write(Utilities.tabCode);
+        String queryParallelism = plan.isSetQueryParallelism() ? Integer.toString(plan.getQueryParallelism()) : "null";
+        out.write(queryParallelism.getBytes(ShowUtils.UTF_8));
+        out.write(Utilities.tabCode);
+        String defaultPoolPath = plan.isSetDefaultPoolPath() ? plan.getDefaultPoolPath() : "null";
+        out.write(defaultPoolPath.getBytes(ShowUtils.UTF_8));

Review comment:
       Fixed
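
       One plausible shape for the fix, assuming the comment was about the repeated write-value-then-tab sequence (writeField is a hypothetical helper name, not from the patch):

           private static void writeField(DataOutputStream out, String value) throws IOException {
             // value bytes followed by the column separator used throughout the formatter
             out.write(value.getBytes(StandardCharsets.UTF_8));
             out.write(Utilities.tabCode);
           }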






[GitHub] [hive] miklosgergely edited a comment on pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely edited a comment on pull request #1756:
URL: https://github.com/apache/hive/pull/1756#issuecomment-753538962


   @belugabehr I've fixed most of the issues you've mentioned. I was mainly focused on putting the code in its proper location, in a manageable structure, and it was great that you looked at the code thoroughly and found these suboptimal spots; thank you for that.
   
   Regarding the append().append()-style issues: these commands are not performance critical (it is not realistic that someone would issue hundreds of SHOW / DESC commands per second), so in these cases I chose code readability over performance. Still, I've fixed the cases where LINE_DELIM is appended, as that doesn't decrease readability.
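   
   The trade-off in question, sketched with the LINE_DELIM constant from ShowUtils:
   
       StringBuilder tableInfo = new StringBuilder();
       // chained appends: no intermediate String is allocated
       tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
       // concatenation inside one append: builds a temporary String first,
       // but reads as a single unit; harmless on a path run once per command
       tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);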






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550923625



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));

Review comment:
       Fixed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));

Review comment:
       Fixed.
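
Independent of what the "Fixed." reply refers to, note a latent ordering issue in the quoted addPartitionData: table.getParameters() is dereferenced to read COLUMN_STATS_ACCURATE one line before the table.getParameters() != null guard, so the guard can never help. A minimal, self-contained sketch of the safe ordering (the names below are stand-ins, not the actual fix, which is not quoted in this thread):

    import java.util.Map;

    public final class NullCheckOrderDemo {
        // Guard the map before dereferencing it, then read the key.
        static String statsState(Map<String, String> parameters, String key) {
            if (parameters == null) {
                return null;
            }
            return parameters.get(key);
        }

        public static void main(String[] args) {
            System.out.println(statsState(null, "COLUMN_STATS_ACCURATE"));  // null, no NPE
            System.out.println(statsState(Map.of("COLUMN_STATS_ACCURATE", "true"),
                    "COLUMN_STATS_ACCURATE"));                              // true
        }
    }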

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));

Review comment:
       Fixed.
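
All three review anchors in this message land on out.write(someString.getBytes("UTF-8")) lines. The String-literal overload of getBytes throws the checked UnsupportedEncodingException, which is why the quoted methods have to declare it; a common cleanup, offered only as a guess at what "Fixed." covers, is the StandardCharsets constant, which throws nothing. A runnable sketch (Java 10+ for ByteArrayOutputStream.toString(Charset)):

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    public final class Utf8WriteDemo {
        public static void main(String[] args) throws IOException {
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (DataOutputStream out = new DataOutputStream(buffer)) {
                // StandardCharsets.UTF_8 raises no UnsupportedEncodingException,
                // unlike the "UTF-8" string-literal overload.
                out.write("# col_name\tdata_type\n".getBytes(StandardCharsets.UTF_8));
            }
            System.out.print(buffer.toString(StandardCharsets.UTF_8));
        }
    }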






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550929515



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);

Review comment:
       Fixed.
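
The flagged line builds "\n# Detailed Partition Information\n" by concatenation inside a single append, while getTableInformation just above chains its appends; presumably the fix made the two consistent. A sketch of the chained form, which writes each piece straight into the builder:

    public final class AppendStyleDemo {
        private static final String LINE_DELIM = "\n";

        public static void main(String[] args) {
            StringBuilder tableInfo = new StringBuilder();
            // Equivalent output to append(LINE_DELIM + "..." + LINE_DELIM),
            // without constructing the intermediate concatenated String.
            tableInfo.append(LINE_DELIM)
                     .append("# Detailed Partition Information")
                     .append(LINE_DELIM);
            System.out.print(tableInfo);
        }
    }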






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924142



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // Handle the case where none of the files in the given locations exist.
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // Whether the location is the table location or a partition
+          // location, it must be a directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {
+      fileData.lastUpdateTime = status.getModificationTime();
+    }
+
+    FileStatus[] entryStatuses = fileSystem.listStatus(status.getPath());
+    for (FileStatus entryStatus : entryStatuses) {
+      if (entryStatus.isDirectory()) {
+        processDir(entryStatus, fileSystem, fileData);
+        continue;
+      }
+
+      fileData.numOfFiles++;
+      if (entryStatus.isErasureCoded()) {
+        fileData.numOfErasureCodedFiles++;
+      }
+
+      long fileLength = entryStatus.getLen();
+      fileData.totalFileSize += fileLength;
+      if (fileLength > fileData.maxFileSize) {
+        fileData.maxFileSize = fileLength;
+      }
+      if (fileLength < fileData.minFileSize) {
+        fileData.minFileSize = fileLength;
+      }
+
+      if (entryStatus.getAccessTime() > fileData.lastAccessTime) {

Review comment:
       Fixed.
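
Beyond the line-level nit, the quoted ShowTableStatusFormatter illustrates the pattern this PR applies across the SHOW commands: an abstract per-command formatter plus a static factory that returns the JSON or text subclass depending on configuration (the real code asks MetaDataFormatUtils.isJson(conf)). A self-contained sketch of that shape, with simplified stand-in names rather than the actual Hive API:

    interface StatusFormatter {
        String format(String tableName);
    }

    final class JsonStatusFormatter implements StatusFormatter {
        public String format(String tableName) {
            return "{\"tableName\":\"" + tableName + "\"}";
        }
    }

    final class TextStatusFormatter implements StatusFormatter {
        public String format(String tableName) {
            return "tableName\t" + tableName;
        }
    }

    public final class FormatterFactoryDemo {
        // Mirrors ShowTableStatusFormatter.getFormatter(conf): one entry point,
        // two output formats, callers never name the concrete subclass.
        static StatusFormatter getFormatter(boolean useJson) {
            return useJson ? new JsonStatusFormatter() : new TextStatusFormatter();
        }

        public static void main(String[] args) {
            System.out.println(getFormatter(true).format("src"));   // JSON row
            System.out.println(getFormatter(false).format("src"));  // tab-separated row
        }
    }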






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924729



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});
+  }
+
+  public static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result = HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    return (result != null) ? result.toString() : "";
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritableV2 writableValue = new DateWritableV2((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static String convertToString(byte[] buffer) {
+    if (buffer == null || buffer.length == 0) {
+      return "";
+    }
+    return new String(Arrays.copyOfRange(buffer, 0, 2));
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Timestamp val) {
+    if (val == null) {
+      return "";
+    }
+
+    TimestampWritableV2 writableValue = new TimestampWritableV2(Timestamp.ofEpochSecond(val.getSecondsSinceEpoch()));
+    return writableValue.toString();
+  }
+
+  /**
+   * Convert the map to a JSON string.
+   */
+  public static void asJson(OutputStream out, Map<String, Object> data) throws HiveException {
+    try {
+      new ObjectMapper().writeValue(out, data);
+    } catch (IOException e) {
+      throw new HiveException("Unable to convert to json", e);
+    }
+  }
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  public static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  public static final int ALIGNMENT = 20;
+
+  /**
+   * Prints a row with the given fields into the builder.
+   * The last field could be a multiline field, and the extra lines should be padded.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Is the last field could be printed in multiple lines, if contains newlines?
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo, boolean isLastLinePadded,
+      boolean isFormatted) {
+    if (!isFormatted) {
+      for (int i = 0; i < fields.length; i++) {
+        Object value = HiveStringUtils.escapeJava(fields[i]);
+        if (value != null) {
+          tableInfo.append(value);
+        }
+        tableInfo.append((i == fields.length - 1) ? LINE_DELIM : FIELD_DELIM);
+      }
+    } else {
+      int[] paddings = new int[fields.length - 1];
+      if (fields.length > 1) {
+        for (int i = 0; i < fields.length - 1; i++) {
+          if (fields[i] == null) {
+            tableInfo.append(FIELD_DELIM);
+            continue;
+          }
+          tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i])).append(FIELD_DELIM);
+          paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+        }
+      }
+      if (fields.length > 0) {
+        String value = fields[fields.length - 1];
+        String unescapedValue = (isLastLinePadded && value != null) ?
+            value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+        indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+      } else {
+        tableInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints a row the given fields to a formatted line.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo) {
+    formatOutput(fields, tableInfo, false, true);
+  }
+
+  /**
+   * Prints the name value pair, and if the value contains newlines, it adds one more empty field
+   * before the two values (Assumes, the name value pair is already indented with it).
+   * 
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();

Review comment:
       Math#max - fixed.
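
This reply is concrete about the fix: the hand-rolled maximum on the flagged line became Math#max. For completeness, both forms side by side (ALIGNMENT and the sample name are stand-ins for the quoted constants):

    public final class MaxPaddingDemo {
        private static final int ALIGNMENT = 20;

        public static void main(String[] args) {
            String name = "numFiles";
            // Hand-rolled maximum, as in the quoted diff:
            int before = ALIGNMENT > name.length() ? ALIGNMENT : name.length();
            // The Math.max form the review reply points to:
            int after = Math.max(ALIGNMENT, name.length());
            System.out.println(before == after);  // true
        }
    }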






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r552905074



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/drop/AbstractDropPartitionAnalyzer.java
##########
@@ -20,25 +20,17 @@
 
 import java.util.ArrayList;
 import java.util.Collection;
-import java.util.HashMap;

Review comment:
       Removed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r552905713



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/lock/show/ShowLocksAnalyzer.java
##########
@@ -26,9 +26,6 @@
 import org.apache.hadoop.hive.ql.ddl.DDLWork;
 import org.apache.hadoop.hive.ql.exec.Task;
 import org.apache.hadoop.hive.ql.exec.TaskFactory;
-import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;
-import org.apache.hadoop.hive.ql.lockmgr.LockException;
-import org.apache.hadoop.hive.ql.lockmgr.TxnManagerFactory;

Review comment:
       Removed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550929585



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);

Review comment:
       Fixed.
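
       (The reviewed suggestion itself is not quoted in this digest; judging
       from the anchored line, the likely cleanup is replacing string
       concatenation inside `append` with chained appends. A sketch under
       that assumption, not the committed code:)

       ```java
       // Hypothetical fix: chained appends avoid building an intermediate
       // String on every call.
       tableInfo.append("Table Parameters:").append(LINE_DELIM);
       ```

       The same pattern recurs at the "Partition Parameters:" anchor below.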

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);

Review comment:
       Fixed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550930626



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/process/show/compactions/ShowCompactionsDesc.java
##########
@@ -32,7 +32,8 @@
   private static final long serialVersionUID = 1L;
 
   public static final String SCHEMA =
-      "compactionid,dbname,tabname,partname,type,state,hostname,workerid,enqueuetime,starttime,duration,hadoopjobid,errormessage#" +
+      "compactionid,dbname,tabname,partname,type,state,hostname,workerid,enqueuetime,starttime,duration,hadoopjobid," +
+      "errormessage#" +

Review comment:
       Added `@formatter:off` / `@formatter:on` markers instead.
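
       For illustration, roughly what that looks like around the schema
       constant (column names are taken from the diff above; the type suffix
       is an assumption based on the usual names#types schema convention):

       ```java
       // @formatter:off - keep the long schema literal hand-wrapped;
       // an IDE formatter would otherwise rewrap it.
       public static final String SCHEMA =
           "compactionid,dbname,tabname,partname,type,state,hostname,workerid,"
           + "enqueuetime,starttime,duration,hadoopjobid,errormessage#"
           + "string:string:string:string:string:string:string"
           + ":string:string:string:string:string:string";
       // @formatter:on
       ```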






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550929524



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);

Review comment:
       Fixed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550805513



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {

Review comment:
       Totally agree, fixed.
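
       (The reviewer's suggestion is not quoted in this digest; if it
       concerned the filter-and-format loop shown above, a streams-based
       shape would look roughly like this. An assumption, not the committed
       code:)

       ```java
       import java.util.TreeMap;
       import java.util.stream.Collectors;

       // Hypothetical rewrite of the loop body of propertiesToString():
       return new TreeMap<>(props).entrySet().stream()
           .filter(e -> e.getValue() != null)
           .filter(e -> exclude == null || !exclude.contains(e.getKey()))
           .map(e -> "  '" + e.getKey() + "'='"
               + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'")
           .collect(Collectors.joining(", \n"));
       ```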






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550805694



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // in case all files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // no matter loc is the table location or part location, it must be a
+          // directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {
+      fileData.lastUpdateTime = status.getModificationTime();
+    }
+
+    FileStatus[] entryStatuses = fileSystem.listStatus(status.getPath());
+    for (FileStatus entryStatus : entryStatuses) {
+      if (entryStatus.isDirectory()) {
+        processDir(entryStatus, fileSystem, fileData);
+        continue;
+      }
+
+      fileData.numOfFiles++;
+      if (entryStatus.isErasureCoded()) {
+        fileData.numOfErasureCodedFiles++;
+      }
+
+      long fileLength = entryStatus.getLen();
+      fileData.totalFileSize += fileLength;
+      if (fileLength > fileData.maxFileSize) {
+        fileData.maxFileSize = fileLength;
+      }
+      if (fileLength < fileData.minFileSize) {
+        fileData.minFileSize = fileLength;
+      }
+
+      if (entryStatus.getAccessTime() > fileData.lastAccessTime) {
+        fileData.lastAccessTime = entryStatus.getAccessTime();
+      }
+      if (entryStatus.getModificationTime() > fileData.lastUpdateTime) {

Review comment:
       Nice catch, fixed.






[GitHub] [hive] belugabehr commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
belugabehr commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550791696



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");

Review comment:
       No need to define this here. Just use the JDK's `StandardCharsets`.
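
       A minimal sketch of that substitution (assuming the call sites pass
       the literal "UTF-8" today, as the diff suggests):

       ```java
       import java.nio.charset.StandardCharsets;

       // StandardCharsets.UTF_8 has been in the JDK since Java 7; unlike
       // String.getBytes("UTF-8"), this overload cannot throw
       // UnsupportedEncodingException.
       out.write(statsData.getBytes(StandardCharsets.UTF_8));
       ```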

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});
+  }
+
+  public static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result = HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    return (result != null) ? result.toString() : "";
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritableV2 writableValue = new DateWritableV2((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static String convertToString(byte[] buffer) {
+    if (buffer == null || buffer.length == 0) {
+      return "";
+    }
+    return new String(Arrays.copyOfRange(buffer, 0, 2));
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Timestamp val) {
+    if (val == null) {
+      return "";
+    }
+
+    TimestampWritableV2 writableValue = new TimestampWritableV2(Timestamp.ofEpochSecond(val.getSecondsSinceEpoch()));
+    return writableValue.toString();
+  }
+
+  /**
+   * Convert the map to a JSON string.
+   */
+  public static void asJson(OutputStream out, Map<String, Object> data) throws HiveException {
+    try {
+      new ObjectMapper().writeValue(out, data);
+    } catch (IOException e) {
+      throw new HiveException("Unable to convert to json", e);
+    }
+  }
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  public static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  public static final int ALIGNMENT = 20;
+
+  /**
+   * Prints a row with the given fields into the builder.
+   * The last field could be a multiline field, and the extra lines should be padded.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Whether the last field may be printed in multiple lines, if it contains newlines
+   * @param isFormatted Whether the output should be aligned into padded columns
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo, boolean isLastLinePadded,
+      boolean isFormatted) {
+    if (!isFormatted) {
+      for (int i = 0; i < fields.length; i++) {
+        Object value = HiveStringUtils.escapeJava(fields[i]);
+        if (value != null) {
+          tableInfo.append(value);
+        }
+        tableInfo.append((i == fields.length - 1) ? LINE_DELIM : FIELD_DELIM);
+      }
+    } else {
+      int[] paddings = new int[fields.length - 1];
+      if (fields.length > 1) {
+        for (int i = 0; i < fields.length - 1; i++) {
+          if (fields[i] == null) {
+            tableInfo.append(FIELD_DELIM);
+            continue;
+          }
+          tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i])).append(FIELD_DELIM);
+          paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+        }
+      }
+      if (fields.length > 0) {
+        String value = fields[fields.length - 1];
+        String unescapedValue = (isLastLinePadded && value != null) ?
+            value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+        indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+      } else {
+        tableInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints a row of the given fields to a formatted line.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo) {
+    formatOutput(fields, tableInfo, false, true);
+  }
+
+  /**
+   * Prints the name value pair, and if the value contains newlines, it adds one more empty field
+   * before the two values (assumes the name value pair is already indented accordingly).
+   * 
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();
+    indentMultilineValue(value, tableInfo, new int[] {0, colNameLength}, true);
+  }
+
+  /**
+   * Prints the name value pair
+   * If the output is padded then the value is unescaped, so it can be printed in multiple lines.
+   * In this case it assumes the pair is already indented with a field delimiter.
+   * 
+   * @param name The field name to print
+   * @param value The value to print
+   * @param tableInfo The target builder
+   * @param isOutputPadded Should the value be printed as a padded string?
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo, boolean isOutputPadded) {
+    String unescapedValue = (isOutputPadded && value != null) ?
+        value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+    formatOutput(name, unescapedValue, tableInfo);
+  }
+
+  /**
+   * Indent processing for multi-line values.
+   * Values should be indented the same amount on each line.
+   * If the first line comment starts indented by k, the following line comments should also be indented by k.
+   * 
+   * @param value the value to write
+   * @param tableInfo the buffer to write to
+   * @param columnWidths the widths of the previous columns
+   * @param printNull print null as a string, or do not print anything
+   */
+  private static void indentMultilineValue(String value, StringBuilder tableInfo, int[] columnWidths,
+      boolean printNull) {
+    if (value == null) {
+      if (printNull) {
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", value));
+      }
+      tableInfo.append(LINE_DELIM);
+    } else {
+      String[] valueSegments = value.split("\n|\r|\r\n");
+      tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[0])).append(LINE_DELIM);
+      for (int i = 1; i < valueSegments.length; i++) {
+        printPadding(tableInfo, columnWidths);
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[i])).append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints the right padding, with the given column widths.
+   * 
+   * @param tableInfo The buffer to write to
+   * @param columnWidths The column widths
+   */
+  private static void printPadding(StringBuilder tableInfo, int[] columnWidths) {
+    for (int columnWidth : columnWidths) {
+      if (columnWidth == 0) {
+        tableInfo.append(FIELD_DELIM);
+      } else {
+        tableInfo.append(String.format("%" + columnWidth + "s" + FIELD_DELIM, ""));
+      }
+    }
+  }
+
+  /**
+   * Helps to format tables in SHOW ... command outputs.
+   */
+  public static class TextMetaDataTable {
+    private List<List<String>> table = new ArrayList<>();
+
+    public void addRow(String... values) {
+      table.add(Lists.<String> newArrayList(values));
+    }
+
+    public String renderTable(boolean isOutputPadded) {
+      StringBuilder stringBuilder = new StringBuilder();
+      for (List<String> row : table) {
+        formatOutput(row.toArray(new String[] {}), stringBuilder, isOutputPadded, isOutputPadded);

Review comment:
       `new String[0]`
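
    A sketch of the suggested form, which lets the JVM allocate the correctly sized array in one step:

    ```java
    // Zero-length array form; on modern JVMs this is at least as fast as presizing:
    formatOutput(row.toArray(new String[0]), stringBuilder, isOutputPadded, isOutputPadded);
    ```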

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/JsonShowTableStatusFormatter.java
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter.JsonDescTableFormatter;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats SHOW TABLE STATUS commands to json format.
+ */
+public class JsonShowTableStatusFormatter extends ShowTableStatusFormatter {
+  @Override
+  public void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition partition)
+      throws HiveException {
+    List<Map<String, Object>> tableData = new ArrayList<>();
+    try {
+      for (Table table : tables) {
+        tableData.add(makeOneTableStatus(table, db, conf, partition));
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+    ShowUtils.asJson(out, MapBuilder.create().put("tables", tableData).build());
+  }
+
+  private Map<String, Object> makeOneTableStatus(Table table, Hive db, HiveConf conf, Partition partition)
+      throws HiveException, IOException {
+    StorageInfo storageInfo = getStorageInfo(table, partition);
+
+    MapBuilder builder = MapBuilder.create();
+    builder.put("tableName", table.getTableName());
+    builder.put("ownerType", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null");
+    builder.put("owner", table.getOwner());
+    builder.put("location", storageInfo.location);
+    builder.put("inputFormat", storageInfo.inputFormatClass);
+    builder.put("outputFormat", storageInfo.outputFormatClass);
+    builder.put("columns", JsonDescTableFormatter.createColumnsInfo(table.getCols(), new ArrayList<>()));

Review comment:
       `Collections.emptyList()`
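
    A sketch of the suggested call, assuming `createColumnsInfo` does not mutate the list it is given (the shared immutable instance avoids an allocation per table):

    ```java
    import java.util.Collections;

    builder.put("columns", JsonDescTableFormatter.createColumnsInfo(table.getCols(), Collections.emptyList()));
    ```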

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {

Review comment:
       `CollectionUtils.isNotEmpty(skewedColNames)` (just like the `skewedColValues` check immediately below)
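
    A sketch of the suggested null-safe condition:

    ```java
    // Mirrors the CollectionUtils check used for skewedColValues just below:
    if (CollectionUtils.isNotEmpty(skewedColNames)) {
      // ... existing skewed-column output unchanged ...
    }
    ```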

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {

Review comment:
       Please add JavaDoc here, and also use the JDK 7+ diamond operator so the type arguments don't have to be repeated on the right-hand side, in several places, for example:
   
   `List<String> realProps = new ArrayList<>();`
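
    A sketch of what that could look like (the JavaDoc wording here is only a suggestion):

    ```java
    /**
     * Renders the given properties as a sorted, comma separated list of 'key'='value'
     * pairs, one pair per line, skipping null values and the excluded keys.
     */
    public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
      // Diamond operator: the type arguments are inferred from the declaration.
      SortedMap<String, String> sortedProperties = new TreeMap<>(props);
      List<String> realProps = new ArrayList<>();
      // ... rest of the method unchanged ...
    }
    ```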

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {

Review comment:
       I personally hate NULL values.  Can you get rid of this check (`exclude == null`) and simply call this method with `Collections.emptySet()`?
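
    A sketch under that assumption — all call sites would need to be updated together, and the `table.getParameters()` caller below is only illustrative:

    ```java
    import java.util.Collections;

    // Callers with nothing to exclude pass an empty set instead of null:
    String props = ShowUtils.propertiesToString(table.getParameters(), Collections.emptySet());
    ```

    The filtering condition then simplifies to `e.getValue() != null && !exclude.contains(e.getKey())`.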

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {

Review comment:
       Use `StandardCharsets.UTF_8`
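
    A sketch of the suggested change; the `Charset` overload of `OutputStreamWriter` also removes the checked `UnsupportedEncodingException` that the `String` overload can throw:

    ```java
    import java.nio.charset.StandardCharsets;

    try (FSDataOutputStream out = fs.create(resFile);
         OutputStreamWriter writer = new OutputStreamWriter(out, StandardCharsets.UTF_8)) {
      writer.write(data);
      writer.write((char) Utilities.newLineCode);
      writer.flush();
    }
    ```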

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");

Review comment:
       You can now just use the JDK's `String#join` method instead of a third-party utility.
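
    A sketch of the suggested replacement (`String.join` takes the delimiter first and accepts any `Iterable` of `CharSequence`):

    ```java
    return String.join(", \n", realProps);
    ```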

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});

Review comment:
       `new String[0]`
   
   https://docs.oracle.com/javase/8/docs/api/java/util/Collection.html#toArray-T:A-
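
    Per the linked `Collection#toArray` contract, a sketch of that call site:

    ```java
    return values.toArray(new String[0]);
    ```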

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});
+  }
+
+  public static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result = HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    return (result != null) ? result.toString() : "";
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritableV2 writableValue = new DateWritableV2((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static String convertToString(byte[] buffer) {
+    if (buffer == null || buffer.length == 0) {
+      return "";
+    }
+    return new String(Arrays.copyOfRange(buffer, 0, 2));
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Timestamp val) {
+    if (val == null) {
+      return "";
+    }
+
+    TimestampWritableV2 writableValue = new TimestampWritableV2(Timestamp.ofEpochSecond(val.getSecondsSinceEpoch()));
+    return writableValue.toString();
+  }
+
+  /**
+   * Writes the map to the output stream as a JSON string.
+   */
+  public static void asJson(OutputStream out, Map<String, Object> data) throws HiveException {
+    try {
+      new ObjectMapper().writeValue(out, data);
+    } catch (IOException e) {
+      throw new HiveException("Unable to convert to json", e);
+    }
+  }
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  public static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  public static final int ALIGNMENT = 20;
+
+  /**
+   * Prints a row with the given fields into the builder.
+   * The last field could be a multiline field, and the extra lines should be padded.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Whether the last field may be printed over multiple lines if it contains newlines
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo, boolean isLastLinePadded,
+      boolean isFormatted) {
+    if (!isFormatted) {
+      for (int i = 0; i < fields.length; i++) {
+        Object value = HiveStringUtils.escapeJava(fields[i]);
+        if (value != null) {
+          tableInfo.append(value);
+        }
+        tableInfo.append((i == fields.length - 1) ? LINE_DELIM : FIELD_DELIM);
+      }
+    } else {
+      int[] paddings = new int[fields.length - 1];
+      if (fields.length > 1) {
+        for (int i = 0; i < fields.length - 1; i++) {
+          if (fields[i] == null) {
+            tableInfo.append(FIELD_DELIM);
+            continue;
+          }
+          tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i])).append(FIELD_DELIM);
+          paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+        }
+      }
+      if (fields.length > 0) {
+        String value = fields[fields.length - 1];
+        String unescapedValue = (isLastLinePadded && value != null) ?
+            value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+        indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+      } else {
+        tableInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints a row with the given fields as a formatted line.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo) {
+    formatOutput(fields, tableInfo, false, true);
+  }
+
+  /**
+   * Prints the name-value pair; if the value contains newlines, one more empty field is added
+   * before the two values (assumes the name-value pair is already indented accordingly).
+   * 
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();

Review comment:
       `Math#max` (the ternary computes a maximum, not a minimum).
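   
   For illustration, a minimal sketch of the suggested change:
   
   ```java
   paddings[i] = Math.max(ALIGNMENT, fields[i].length());
   ```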

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesFormatter.java
##########
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.database.show;
+
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Formats SHOW DATABASES results.
+ */
+abstract class ShowDatabasesFormatter {
+  static ShowDatabasesFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowDatabasesFormatter();
+    } else {
+      return new TextShowDatabasesFormatter();
+    }
+  }
+
+  abstract void showDatabases(DataOutputStream out, List<String> databases) throws HiveException;
+
+  // ------ Implementations ------
+
+  static class JsonShowDatabasesFormatter extends ShowDatabasesFormatter {
+    @Override
+    void showDatabases(DataOutputStream out, List<String> databases) throws HiveException {
+      ShowUtils.asJson(out, MapBuilder.create().put("databases", databases).build());
+    }
+  }
+
+  static class TextShowDatabasesFormatter extends ShowDatabasesFormatter {
+    @Override
+    void showDatabases(DataOutputStream out, List<String> databases) throws HiveException {
+      try {
+        for (String database : databases) {
+          out.write(database.getBytes("UTF-8"));

Review comment:
       `StandardCharsets.UTF_8`
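   
   A sketch of the suggested change; `String#getBytes(Charset)` also removes the need to handle `UnsupportedEncodingException`:
   
   ```java
   // assumes: import java.nio.charset.StandardCharsets;
   out.write(database.getBytes(StandardCharsets.UTF_8));
   ```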

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/JsonDescTableFormatter.java
##########
@@ -0,0 +1,265 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+
+import java.io.DataOutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats DESC TABLE results to json format.
+ */
+public class JsonDescTableFormatter extends DescTableFormatter {
+  private static final String COLUMN_NAME = "name";
+  private static final String COLUMN_TYPE = "type";
+  private static final String COLUMN_COMMENT = "comment";
+  private static final String COLUMN_MIN = "min";
+  private static final String COLUMN_MAX = "max";
+  private static final String COLUMN_NUM_NULLS = "numNulls";
+  private static final String COLUMN_NUM_TRUES = "numTrues";
+  private static final String COLUMN_NUM_FALSES = "numFalses";
+  private static final String COLUMN_DISTINCT_COUNT = "distinctCount";
+  private static final String COLUMN_AVG_LENGTH = "avgColLen";
+  private static final String COLUMN_MAX_LENGTH = "maxColLen";
+
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    MapBuilder builder = MapBuilder.create();
+    builder.put("columns", createColumnsInfo(columns, columnStats));
+
+    if (isExtended) {
+      addExtendedInfo(table, partition, builder);
+    }
+
+    ShowUtils.asJson(out, builder.build());
+  }
+
+  public static List<Map<String, Object>> createColumnsInfo(List<FieldSchema> columns,
+      List<ColumnStatisticsObj> columnStatisticsList) {
+    List<Map<String, Object>> columnsInfo = new ArrayList<>();

Review comment:
       `... = new ArrayList<>(columns.size());`
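   
   A sketch; pre-sizing avoids intermediate array copies, since the result always holds one entry per column:
   
   ```java
   List<Map<String, Object>> columnsInfo = new ArrayList<>(columns.size());
   ```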

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});
+  }
+
+  public static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result = HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    return (result != null) ? result.toString() : "";
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritableV2 writableValue = new DateWritableV2((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static String convertToString(byte[] buffer) {
+    if (buffer == null || buffer.length == 0) {
+      return "";
+    }
+    return new String(Arrays.copyOfRange(buffer, 0, 2));
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Timestamp val) {
+    if (val == null) {
+      return "";
+    }
+
+    TimestampWritableV2 writableValue = new TimestampWritableV2(Timestamp.ofEpochSecond(val.getSecondsSinceEpoch()));
+    return writableValue.toString();
+  }
+
+  /**
+   * Writes the map to the output stream as a JSON string.
+   */
+  public static void asJson(OutputStream out, Map<String, Object> data) throws HiveException {
+    try {
+      new ObjectMapper().writeValue(out, data);
+    } catch (IOException e) {
+      throw new HiveException("Unable to convert to json", e);
+    }
+  }
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  public static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  public static final int ALIGNMENT = 20;
+
+  /**
+   * Prints a row with the given fields into the builder.
+   * The last field could be a multiline field, and the extra lines should be padded.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Whether the last field may be printed over multiple lines if it contains newlines
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo, boolean isLastLinePadded,
+      boolean isFormatted) {
+    if (!isFormatted) {
+      for (int i = 0; i < fields.length; i++) {
+        Object value = HiveStringUtils.escapeJava(fields[i]);
+        if (value != null) {
+          tableInfo.append(value);
+        }
+        tableInfo.append((i == fields.length - 1) ? LINE_DELIM : FIELD_DELIM);
+      }
+    } else {
+      int[] paddings = new int[fields.length - 1];
+      if (fields.length > 1) {
+        for (int i = 0; i < fields.length - 1; i++) {
+          if (fields[i] == null) {
+            tableInfo.append(FIELD_DELIM);
+            continue;
+          }
+          tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i])).append(FIELD_DELIM);
+          paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+        }
+      }
+      if (fields.length > 0) {
+        String value = fields[fields.length - 1];
+        String unescapedValue = (isLastLinePadded && value != null) ?
+            value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+        indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+      } else {
+        tableInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints a row with the given fields as a formatted line.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo) {
+    formatOutput(fields, tableInfo, false, true);
+  }
+
+  /**
+   * Prints the name-value pair; if the value contains newlines, one more empty field is added
+   * before the two values (assumes the name-value pair is already indented accordingly).
+   * 
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();
+    indentMultilineValue(value, tableInfo, new int[] {0, colNameLength}, true);
+  }
+
+  /**
+   * Prints the name-value pair.
+   * If the output is padded, the value is unescaped so it can be printed over multiple lines.
+   * In this case the pair is assumed to be already indented with a field delimiter.
+   * 
+   * @param name The field name to print
+   * @param value The value to print
+   * @param tableInfo The target builder
+   * @param isOutputPadded Should the value printed as a padded string?
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo, boolean isOutputPadded) {
+    String unescapedValue = (isOutputPadded && value != null) ?
+        value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+    formatOutput(name, unescapedValue, tableInfo);
+  }
+
+  /**
+   * Indent processing for multi-line values.
+   * Values should be indented by the same amount on each line.
+   * If the first line of the comment is indented by k, the following lines should also be indented by k.
+   * 
+   * @param value the value to write
+   * @param tableInfo the buffer to write to
+   * @param columnWidths the widths of the previous columns
+   * @param printNull print null as a string, or do not print anything
+   */
+  private static void indentMultilineValue(String value, StringBuilder tableInfo, int[] columnWidths,
+      boolean printNull) {
+    if (value == null) {
+      if (printNull) {
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", value));
+      }
+      tableInfo.append(LINE_DELIM);
+    } else {
+      String[] valueSegments = value.split("\n|\r|\r\n");
+      tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[0])).append(LINE_DELIM);
+      for (int i = 1; i < valueSegments.length; i++) {
+        printPadding(tableInfo, columnWidths);
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[i])).append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints the right padding, with the given column widths.
+   * 
+   * @param tableInfo The buffer to write to
+   * @param columnWidths The column widths
+   */
+  private static void printPadding(StringBuilder tableInfo, int[] columnWidths) {
+    for (int columnWidth : columnWidths) {
+      if (columnWidth == 0) {
+        tableInfo.append(FIELD_DELIM);
+      } else {
+        tableInfo.append(String.format("%" + columnWidth + "s" + FIELD_DELIM, ""));
+      }
+    }
+  }
+
+  /**
+   * Helps to format tables in SHOW ... command outputs.
+   */
+  public static class TextMetaDataTable {
+    private List<List<String>> table = new ArrayList<>();
+
+    public void addRow(String... values) {
+      table.add(Lists.<String> newArrayList(values));

Review comment:
       `Arrays.asList`
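   
   A sketch of the suggested change; note that `Arrays.asList` returns a fixed-size view backed by the varargs array, which is fine here because rows are only read afterwards:
   
   ```java
   // assumes: import java.util.Arrays;
   public void addRow(String... values) {
     table.add(Arrays.asList(values));
   }
   ```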

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseFormatter.java
##########
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.database.desc;
+
+import org.apache.commons.collections.MapUtils;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.PrincipalType;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.Map;
+
+/**
+ * Formats DESC DATABASES results.
+ */
+abstract class DescDatabaseFormatter {
+  static DescDatabaseFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonDescDatabaseFormatter();
+    } else {
+      return new TextDescDatabaseFormatter();
+    }
+  }
+
+  abstract void showDatabaseDescription(DataOutputStream out, String database, String comment, String location,
+      String managedLocation, String ownerName, PrincipalType ownerType, Map<String, String> params)
+      throws HiveException;
+
+  // ------ Implementations ------
+
+  static class JsonDescDatabaseFormatter extends DescDatabaseFormatter {
+    @Override
+    void showDatabaseDescription(DataOutputStream out, String database, String comment, String location,
+        String managedLocation, String ownerName, PrincipalType ownerType, Map<String, String> params)
+        throws HiveException {
+      MapBuilder builder = MapBuilder.create()
+          .put("database", database)
+          .put("comment", comment)
+          .put("location", location);
+      if (managedLocation != null) {
+        builder.put("managedLocation", managedLocation);
+      }
+      if (ownerName != null) {
+        builder.put("owner", ownerName);
+      }
+      if (ownerType != null) {
+        builder.put("ownerType", ownerType.name());
+      }
+      if (MapUtils.isNotEmpty(params)) {
+        builder.put("params", params);
+      }
+      ShowUtils.asJson(out, builder.build());
+    }
+  }
+
+  static class TextDescDatabaseFormatter extends DescDatabaseFormatter {
+    @Override
+    void showDatabaseDescription(DataOutputStream out, String database, String comment, String location,
+        String managedLocation, String ownerName, PrincipalType ownerType, Map<String, String> params)
+        throws HiveException {
+      try {
+        out.write(database.getBytes("UTF-8"));
+        out.write(Utilities.tabCode);
+        if (comment != null) {
+          out.write(HiveStringUtils.escapeJava(comment).getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (location != null) {
+          out.write(location.getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (managedLocation != null) {
+          out.write(managedLocation.getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (ownerName != null) {
+          out.write(ownerName.getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (ownerType != null) {
+          out.write(ownerType.name().getBytes("UTF-8"));
+        }
+        out.write(Utilities.tabCode);
+        if (MapUtils.isNotEmpty(params)) {
+          out.write(params.toString().getBytes("UTF-8"));
+        }

Review comment:
       `StandardCharsets.UTF_8`
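   
   Same here; a sketch of one of the writes using `StandardCharsets`, which also drops the checked `UnsupportedEncodingException`:
   
   ```java
   // assumes: import java.nio.charset.StandardCharsets;
   out.write(database.getBytes(StandardCharsets.UTF_8));
   ```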

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));

Review comment:
       `new String[0]`
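   
   A sketch of the suggested change; on current JVMs the zero-length-array idiom is at least as fast and reads more cleanly:
   
   ```java
   metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[0]));
   ```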

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/process/show/compactions/ShowCompactionsDesc.java
##########
@@ -32,7 +32,8 @@
   private static final long serialVersionUID = 1L;
 
   public static final String SCHEMA =
-      "compactionid,dbname,tabname,partname,type,state,hostname,workerid,enqueuetime,starttime,duration,hadoopjobid,errormessage#" +
+      "compactionid,dbname,tabname,partname,type,state,hostname,workerid,enqueuetime,starttime,duration,hadoopjobid," +
+      "errormessage#" +

Review comment:
       Please remove this formatting change; it has little value and adds noise to an already large review.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});
+  }
+
+  public static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result = HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    return (result != null) ? result.toString() : "";
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritableV2 writableValue = new DateWritableV2((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static String convertToString(byte[] buffer) {
+    if (buffer == null || buffer.length == 0) {
+      return "";
+    }
+    return new String(Arrays.copyOfRange(buffer, 0, 2));
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Timestamp val) {
+    if (val == null) {
+      return "";
+    }
+
+    TimestampWritableV2 writableValue = new TimestampWritableV2(Timestamp.ofEpochSecond(val.getSecondsSinceEpoch()));
+    return writableValue.toString();
+  }
+
+  /**
+   * Writes the map to the output stream as a JSON string.
+   */
+  public static void asJson(OutputStream out, Map<String, Object> data) throws HiveException {
+    try {
+      new ObjectMapper().writeValue(out, data);
+    } catch (IOException e) {
+      throw new HiveException("Unable to convert to json", e);
+    }
+  }
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  public static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  public static final int ALIGNMENT = 20;
+
+  /**
+   * Prints a row with the given fields into the builder.
+   * The last field could be a multiline field, and the extra lines should be padded.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Whether the last field may be printed over multiple lines if it contains newlines
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo, boolean isLastLinePadded,
+      boolean isFormatted) {
+    if (!isFormatted) {
+      for (int i = 0; i < fields.length; i++) {
+        Object value = HiveStringUtils.escapeJava(fields[i]);
+        if (value != null) {
+          tableInfo.append(value);
+        }
+        tableInfo.append((i == fields.length - 1) ? LINE_DELIM : FIELD_DELIM);
+      }
+    } else {
+      int[] paddings = new int[fields.length - 1];
+      if (fields.length > 1) {
+        for (int i = 0; i < fields.length - 1; i++) {
+          if (fields[i] == null) {
+            tableInfo.append(FIELD_DELIM);
+            continue;
+          }
+          tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i])).append(FIELD_DELIM);
+          paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+        }
+      }
+      if (fields.length > 0) {
+        String value = fields[fields.length - 1];
+        String unescapedValue = (isLastLinePadded && value != null) ?
+            value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+        indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+      } else {
+        tableInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints a row with the given fields as a formatted line.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo) {
+    formatOutput(fields, tableInfo, false, true);
+  }
+
+  /**
+   * Prints the name-value pair; if the value contains newlines, one more empty field is added
+   * before the two values (assumes the name-value pair is already indented accordingly).
+   * 
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();
+    indentMultilineValue(value, tableInfo, new int[] {0, colNameLength}, true);
+  }
+
+  /**
+   * Prints the name-value pair.
+   * If the output is padded, the value is unescaped so it can be printed over multiple lines.
+   * In this case the pair is assumed to be already indented with a field delimiter.
+   * 
+   * @param name The field name to print
+   * @param value The value to print
+   * @param tableInfo The target builder
+   * @param isOutputPadded Should the value printed as a padded string?
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo, boolean isOutputPadded) {
+    String unescapedValue = (isOutputPadded && value != null) ?
+        value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+    formatOutput(name, unescapedValue, tableInfo);
+  }
+
+  /**
+   * Indent processing for multi-line values.
+   * Values should be indented by the same amount on each line.
+   * If the first line of the comment is indented by k, the following lines should also be indented by k.
+   * 
+   * @param value the value to write
+   * @param tableInfo the buffer to write to
+   * @param columnWidths the widths of the previous columns
+   * @param printNull print null as a string, or do not print anything
+   */
+  private static void indentMultilineValue(String value, StringBuilder tableInfo, int[] columnWidths,
+      boolean printNull) {
+    if (value == null) {
+      if (printNull) {
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", value));
+      }
+      tableInfo.append(LINE_DELIM);
+    } else {
+      String[] valueSegments = value.split("\n|\r|\r\n");
+      tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[0])).append(LINE_DELIM);
+      for (int i = 1; i < valueSegments.length; i++) {
+        printPadding(tableInfo, columnWidths);
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[i])).append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints the right padding, with the given column widths.
+   * 
+   * @param tableInfo The buffer to write to
+   * @param columnWidths The column widths
+   */
+  private static void printPadding(StringBuilder tableInfo, int[] columnWidths) {
+    for (int columnWidth : columnWidths) {
+      if (columnWidth == 0) {
+        tableInfo.append(FIELD_DELIM);
+      } else {
+        tableInfo.append(String.format("%" + columnWidth + "s" + FIELD_DELIM, ""));
+      }
+    }
+  }
+
+  /**
+   * Helps to format tables in SHOW ... command outputs.
+   */
+  public static class TextMetaDataTable {
+    private List<List<String>> table = new ArrayList<>();
+
+    public void addRow(String... values) {
+      table.add(Lists.<String> newArrayList(values));
+    }
+
+    public String renderTable(boolean isOutputPadded) {
+      StringBuilder stringBuilder = new StringBuilder();
+      for (List<String> row : table) {
+        formatOutput(row.toArray(new String[] {}), stringBuilder, isOutputPadded, isOutputPadded);
+      }
+      return stringBuilder.toString();
+    }
+
+    public void transpose() {
+      if (table.size() == 0) {
+        return;
+      }
+      List<List<String>> newTable = new ArrayList<List<String>>();
+      for (int i = 0; i < table.get(0).size(); i++) {
+        newTable.add(new ArrayList<>());
+      }
+      for (List<String> sourceRow : table) {
+        if (newTable.size() != sourceRow.size()) {
+          throw new RuntimeException("invalid table size");
+        }
+        for (int i = 0; i < sourceRow.size(); i++) {
+          newTable.get(i).add(sourceRow.get(i));
+        }
+      }
+      table = newTable;
+    }

Review comment:
       There's got to be a better way of doing this...
   
   List#addAll or something other than 1-by-1 iteration.
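   For illustration, one possible rewrite along those lines is sketched below; it builds each transposed row in a single stream pass instead of appending cell by cell. This is only a sketch against the class as shown, not the committed change (it assumes a `java.util.stream.Collectors` import and that all source rows have the same width, which the original also enforces):
   
   ```
   public void transpose() {
     if (table.isEmpty()) {
       return;
     }
     int width = table.get(0).size();
     List<List<String>> newTable = new ArrayList<>(width);
     for (int i = 0; i < width; i++) {
       final int column = i;
       // Collect the i-th cell of every row into the new row in one pass.
       newTable.add(table.stream()
           .map(row -> row.get(column))
           .collect(Collectors.toList()));
     }
     table = newTable;
   }
   ```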

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));

Review comment:
       `StandardCharsets`
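   For illustration, the suggested change could look like this sketch (assuming a `java.nio.charset.StandardCharsets` import; not the committed code):
   
   ```
   // String.getBytes(Charset) does not throw the checked UnsupportedEncodingException.
   out.write(formattedTableInfo.getBytes(StandardCharsets.UTF_8));
   ```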

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));

Review comment:
       `StandardCharsets`

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));

Review comment:
       `StandardCharsets`

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/TextShowTableStatusFormatter.java
##########
@@ -0,0 +1,138 @@
+/**
+ * Formats SHOW TABLE STATUS commands to text format.
+ */
+public class TextShowTableStatusFormatter extends ShowTableStatusFormatter {
+  @Override
+  public void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition partition)
+      throws HiveException {
+    try {
+      for (Table table : tables) {
+        writeBasicInfo(out, table);
+        writeStorageInfo(out, partition, table);
+        writeColumnsInfo(out, table);
+        writeFileSystemInfo(out, db, conf, partition, table);
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void writeBasicInfo(DataOutputStream out, Table table) throws IOException, UnsupportedEncodingException {
+    out.write(("tableName:" + table.getTableName()).getBytes("UTF-8"));
+    out.write(Utilities.newLineCode);
+    out.write(("owner:" + table.getOwner()).getBytes("UTF-8"));
+    out.write(Utilities.newLineCode);

Review comment:
       `StandardCharsets`
   
   Also,...
   
   ```
   out.write("owner:");
   out.write(table.getOwner());
   ```
   
   No need to concat strings here.
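   Applied to `writeBasicInfo`, both points together might look like this sketch (not the committed code; it assumes a `java.nio.charset.StandardCharsets` import and keeps `Utilities.newLineCode` for the line breaks):
   
   ```
   private void writeBasicInfo(DataOutputStream out, Table table) throws IOException {
     // Write the label and the value as separate byte arrays instead of
     // concatenating them into a temporary String first.
     out.write("tableName:".getBytes(StandardCharsets.UTF_8));
     out.write(table.getTableName().getBytes(StandardCharsets.UTF_8));
     out.write(Utilities.newLineCode);
     out.write("owner:".getBytes(StandardCharsets.UTF_8));
     out.write(table.getOwner().getBytes(StandardCharsets.UTF_8));
     out.write(Utilities.newLineCode);
   }
   ```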

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // in case all files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // whether the location is the table location or a partition location,
+          // it must be a directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();

Review comment:
       `Math.max`
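   That is, the two comparisons at the top of `processDir` could collapse to (sketch only):
   
   ```
   fileData.lastAccessTime = Math.max(fileData.lastAccessTime, status.getAccessTime());
   fileData.lastUpdateTime = Math.max(fileData.lastUpdateTime, status.getModificationTime());
   ```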

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {
+      fileData.lastUpdateTime = status.getModificationTime();
+    }
+
+    FileStatus[] entryStatuses = fileSystem.listStatus(status.getPath());
+    for (FileStatus entryStatus : entryStatuses) {
+      if (entryStatus.isDirectory()) {
+        processDir(entryStatus, fileSystem, fileData);
+        continue;
+      }
+
+      fileData.numOfFiles++;
+      if (entryStatus.isErasureCoded()) {
+        fileData.numOfErasureCodedFiles++;
+      }
+
+      long fileLength = entryStatus.getLen();
+      fileData.totalFileSize += fileLength;
+      if (fileLength > fileData.maxFileSize) {
+        fileData.maxFileSize = fileLength;
+      }
+      if (fileLength < fileData.minFileSize) {

Review comment:
       `Math.min`
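   For the file-size bounds that would be `Math.max` for the maximum and `Math.min` for the minimum, e.g. (sketch only):
   
   ```
   fileData.maxFileSize = Math.max(fileData.maxFileSize, fileLength);
   fileData.minFileSize = Math.min(fileData.minFileSize, fileLength);
   ```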

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/JsonShowTableStatusFormatter.java
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter.JsonDescTableFormatter;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats SHOW TABLE STATUS commands to json format.
+ */
+public class JsonShowTableStatusFormatter extends ShowTableStatusFormatter {
+  @Override
+  public void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition partition)
+      throws HiveException {
+    List<Map<String, Object>> tableData = new ArrayList<>();
+    try {
+      for (Table table : tables) {
+        tableData.add(makeOneTableStatus(table, db, conf, partition));
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+    ShowUtils.asJson(out, MapBuilder.create().put("tables", tableData).build());
+  }
+
+  private Map<String, Object> makeOneTableStatus(Table table, Hive db, HiveConf conf, Partition partition)
+      throws HiveException, IOException {
+    StorageInfo storageInfo = getStorageInfo(table, partition);
+
+    MapBuilder builder = MapBuilder.create();
+    builder.put("tableName", table.getTableName());
+    builder.put("ownerType", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null");
+    builder.put("owner", table.getOwner());
+    builder.put("location", storageInfo.location);
+    builder.put("inputFormat", storageInfo.inputFormatClass);
+    builder.put("outputFormat", storageInfo.outputFormatClass);
+    builder.put("columns", JsonDescTableFormatter.createColumnsInfo(table.getCols(), new ArrayList<>()));
+
+    builder.put("partitioned", table.isPartitioned());
+    if (table.isPartitioned()) {
+      builder.put("partitionColumns", JsonDescTableFormatter.createColumnsInfo(table.getPartCols(), new ArrayList<>()));

Review comment:
       `Collections.emptyList()`
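   That is, roughly (sketch only; this is safe only if `createColumnsInfo` never mutates the list it receives, since `Collections.emptyList()` is immutable):
   
   ```
   builder.put("partitionColumns",
       JsonDescTableFormatter.createColumnsInfo(table.getPartCols(), Collections.emptyList()));
   ```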

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {
+      fileData.lastUpdateTime = status.getModificationTime();
+    }
+
+    FileStatus[] entryStatuses = fileSystem.listStatus(status.getPath());
+    for (FileStatus entryStatus : entryStatuses) {
+      if (entryStatus.isDirectory()) {
+        processDir(entryStatus, fileSystem, fileData);
+        continue;
+      }
+
+      fileData.numOfFiles++;
+      if (entryStatus.isErasureCoded()) {
+        fileData.numOfErasureCodedFiles++;
+      }
+
+      long fileLength = entryStatus.getLen();
+      fileData.totalFileSize += fileLength;
+      if (fileLength > fileData.maxFileSize) {

Review comment:
       `Math.max`

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // in case all files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // no matter loc is the table location or part location, it must be a
+          // directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {

Review comment:
       `Math.max`

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // Initialize from the table path, in case none of the files in the locations exist.
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);

Review comment:
       `"Cannot access File System. File System status will be unknown.`

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      Map<String, String> tableParameters = table.getParameters();
+      String statsState = tableParameters == null ? null : tableParameters.get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // Walk through the existing map and truncate each path, so that tests
+        // won't mask it and we can verify the location is right.
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
+  private class VectorComparator<T extends Comparable<T>> implements Comparator<List<T>> {
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else {
+          if (valB != null) {
+            return -1;
+          }
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  private <T> List<T> sortList(List<T> list, Comparator<T> comparator) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret, comparator);
+    return ret;
+  }
+
+  private String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo) {
+    displayAllParameters(params, tableInfo, true, false);
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo, boolean escapeUnicode,
+      boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
+      if (key.equals(StatsSetupConst.NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value) : HiveStringUtils.escapeJava(value),
+          tableInfo, isOutputPadded);
+    }
+  }
+
+  private String getConstraintsInformation(Table table) {
+    StringBuilder constraintsInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    constraintsInfo.append(LINE_DELIM + "# Constraints" + LINE_DELIM);
+    if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Primary Key" + LINE_DELIM);
+      getPrimaryKeyInformation(constraintsInfo, table.getPrimaryKeyInfo());
+    }
+    if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Foreign Keys" + LINE_DELIM);
+      getForeignKeysInformation(constraintsInfo, table.getForeignKeyInfo());
+    }
+    if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Unique Constraints" + LINE_DELIM);
+      getUniqueConstraintsInformation(constraintsInfo, table.getUniqueKeyInfo());
+    }
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Not Null Constraints" + LINE_DELIM);
+      getNotNullConstraintsInformation(constraintsInfo, table.getNotNullConstraint());
+    }
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Default Constraints" + LINE_DELIM);
+      getDefaultConstraintsInformation(constraintsInfo, table.getDefaultConstraint());
+    }
+    if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Check Constraints" + LINE_DELIM);
+      getCheckConstraintsInformation(constraintsInfo, table.getCheckConstraint());
+    }
+    return constraintsInfo.toString();
+  }
+
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();
+    for (String columnName : columnNames.values()) {
+      constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title) + FIELD_DELIM);
+      formatOutput(new String[] {columnName}, constraintsInfo);
+    }
+  }
+
+  private void getForeignKeysInformation(StringBuilder constraintsInfo, ForeignKeyInfo constraint) {
+    formatOutput("Table:", constraint.getChildDatabaseName() + "." + constraint.getChildTableName(), constraintsInfo);
+    Map<String, List<ForeignKeyCol>> foreignKeys = constraint.getForeignKeys();
+    if (MapUtils.isNotEmpty(foreignKeys)) {
+      for (Map.Entry<String, List<ForeignKeyCol>> entry : foreignKeys.entrySet()) {
+        getForeignKeyRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getForeignKeyRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<ForeignKeyCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (ForeignKeyCol column : columns) {
+        String[] fields = new String[3];
+        fields[0] = "Parent Column Name:" +
+            column.parentDatabaseName + "."+ column.parentTableName + "." + column.parentColName;
+        fields[1] = "Column Name:" + column.childColName;
+        fields[2] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getUniqueConstraintsInformation(StringBuilder constraintsInfo, UniqueConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<UniqueConstraintCol>> uniqueConstraints = constraint.getUniqueConstraints();
+    if (MapUtils.isNotEmpty(uniqueConstraints)) {
+      for (Map.Entry<String, List<UniqueConstraintCol>> entry : uniqueConstraints.entrySet()) {
+        getUniqueConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getUniqueConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<UniqueConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (UniqueConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getNotNullConstraintsInformation(StringBuilder constraintsInfo, NotNullConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, String> notNullConstraints = constraint.getNotNullConstraints();
+    if (MapUtils.isNotEmpty(notNullConstraints)) {
+      for (Map.Entry<String, String> entry : notNullConstraints.entrySet()) {
+        formatOutput("Constraint Name:", entry.getKey(), constraintsInfo);
+        formatOutput("Column Name:", entry.getValue(), constraintsInfo);
+        constraintsInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  private void getDefaultConstraintsInformation(StringBuilder constraintsInfo, DefaultConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<DefaultConstraintCol>> defaultConstraints = constraint.getDefaultConstraints();
+    if (MapUtils.isNotEmpty(defaultConstraints)) {
+      for (Map.Entry<String, List<DefaultConstraintCol>> entry : defaultConstraints.entrySet()) {
+        getDefaultConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getDefaultConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<DefaultConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (DefaultConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Default Value:" + column.defaultVal;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getCheckConstraintsInformation(StringBuilder constraintsInfo, CheckConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<CheckConstraintCol>> checkConstraints = constraint.getCheckConstraints();
+    if (MapUtils.isNotEmpty(checkConstraints)) {
+      for (Map.Entry<String, List<CheckConstraintCol>> entry : checkConstraints.entrySet()) {
+        getCheckConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getCheckConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<CheckConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (CheckConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Check Value:" + column.checkExpression;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void addExtendedTableData(DataOutputStream out, Table table, Partition partition) throws IOException {
+    if (partition != null) {
+      out.write(("Detailed Partition Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(partition.getTPartition().toString().getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    } else {
+      out.write(("Detailed Table Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      String tableDesc = HiveStringUtils.escapeJava(table.getTTable().toString());
+      out.write(tableDesc.getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    }
+  }
+
+  private void addExtendedConstraintData(DataOutputStream out, Table table)
+      throws IOException, UnsupportedEncodingException {
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      out.write(("Constraints").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+        out.write(table.getPrimaryKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+        out.write(table.getForeignKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+        out.write(table.getUniqueKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+        out.write(table.getNotNullConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+        out.write(table.getDefaultConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+        out.write(table.getCheckConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+    }

Review comment:
       `StandardCharsets` - consider `java.nio.charset.StandardCharsets.UTF_8` instead of the `"UTF-8"` string literal here; it avoids the checked `UnsupportedEncodingException` and the charset lookup by name.
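
       A short sketch of the suggested substitution (assuming `java.nio.charset.StandardCharsets` is what the comment refers to; the `writeUtf8` helper below is hypothetical):

       ```java
       import java.io.DataOutputStream;
       import java.io.IOException;
       import java.nio.charset.StandardCharsets;

       final class Utf8Writes {
         // Writes a string as UTF-8 bytes; StandardCharsets.UTF_8 avoids the
         // checked UnsupportedEncodingException of String#getBytes(String).
         static void writeUtf8(DataOutputStream out, String s) throws IOException {
           out.write(s.getBytes(StandardCharsets.UTF_8));
         }
       }
       ```

       The calls above would then read e.g. `writeUtf8(out, "Constraints");`, and the `UnsupportedEncodingException` in the method signatures could be dropped.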

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      Map<String, String> tableParameters = table.getParameters();
+      String statsState = tableParameters == null ? null : tableParameters.get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // Walk through the existing map and truncate each path, so that tests
+        // won't mask it and we can verify the location is right.
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
+  private class VectorComparator<T extends Comparable<T>> implements Comparator<List<T>> {
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else {
+          if (valB != null) {
+            return -1;
+          }
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  private <T> List<T> sortList(List<T> list, Comparator<T> comparator) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret, comparator);
+    return ret;
+  }
+
+  private String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo) {
+    displayAllParameters(params, tableInfo, true, false);
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo, boolean escapeUnicode,
+      boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
+      if (key.equals(StatsSetupConst.NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value) : HiveStringUtils.escapeJava(value),
+          tableInfo, isOutputPadded);
+    }
+  }
+
+  private String getConstraintsInformation(Table table) {
+    StringBuilder constraintsInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    constraintsInfo.append(LINE_DELIM + "# Constraints" + LINE_DELIM);
+    if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Primary Key" + LINE_DELIM);
+      getPrimaryKeyInformation(constraintsInfo, table.getPrimaryKeyInfo());
+    }
+    if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Foreign Keys" + LINE_DELIM);
+      getForeignKeysInformation(constraintsInfo, table.getForeignKeyInfo());
+    }
+    if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Unique Constraints" + LINE_DELIM);
+      getUniqueConstraintsInformation(constraintsInfo, table.getUniqueKeyInfo());
+    }
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Not Null Constraints" + LINE_DELIM);
+      getNotNullConstraintsInformation(constraintsInfo, table.getNotNullConstraint());
+    }
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Default Constraints" + LINE_DELIM);
+      getDefaultConstraintsInformation(constraintsInfo, table.getDefaultConstraint());
+    }
+    if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Check Constraints" + LINE_DELIM);
+      getCheckConstraintsInformation(constraintsInfo, table.getCheckConstraint());
+    }
+    return constraintsInfo.toString();
+  }
+
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();
+    for (String columnName : columnNames.values()) {
+      constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title) + FIELD_DELIM);
+      formatOutput(new String[] {columnName}, constraintsInfo);
+    }
+  }
+
+  private void getForeignKeysInformation(StringBuilder constraintsInfo, ForeignKeyInfo constraint) {
+    formatOutput("Table:", constraint.getChildDatabaseName() + "." + constraint.getChildTableName(), constraintsInfo);
+    Map<String, List<ForeignKeyCol>> foreignKeys = constraint.getForeignKeys();
+    if (MapUtils.isNotEmpty(foreignKeys)) {
+      for (Map.Entry<String, List<ForeignKeyCol>> entry : foreignKeys.entrySet()) {
+        getForeignKeyRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getForeignKeyRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<ForeignKeyCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (ForeignKeyCol column : columns) {
+        String[] fields = new String[3];
+        fields[0] = "Parent Column Name:" +
+            column.parentDatabaseName + "." + column.parentTableName + "." + column.parentColName;
+        fields[1] = "Column Name:" + column.childColName;
+        fields[2] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getUniqueConstraintsInformation(StringBuilder constraintsInfo, UniqueConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<UniqueConstraintCol>> uniqueConstraints = constraint.getUniqueConstraints();
+    if (MapUtils.isNotEmpty(uniqueConstraints)) {
+      for (Map.Entry<String, List<UniqueConstraintCol>> entry : uniqueConstraints.entrySet()) {
+        getUniqueConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getUniqueConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<UniqueConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (UniqueConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getNotNullConstraintsInformation(StringBuilder constraintsInfo, NotNullConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, String> notNullConstraints = constraint.getNotNullConstraints();
+    if (MapUtils.isNotEmpty(notNullConstraints)) {
+      for (Map.Entry<String, String> entry : notNullConstraints.entrySet()) {
+        formatOutput("Constraint Name:", entry.getKey(), constraintsInfo);
+        formatOutput("Column Name:", entry.getValue(), constraintsInfo);
+        constraintsInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  private void getDefaultConstraintsInformation(StringBuilder constraintsInfo, DefaultConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<DefaultConstraintCol>> defaultConstraints = constraint.getDefaultConstraints();
+    if (MapUtils.isNotEmpty(defaultConstraints)) {
+      for (Map.Entry<String, List<DefaultConstraintCol>> entry : defaultConstraints.entrySet()) {
+        getDefaultConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getDefaultConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<DefaultConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (DefaultConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Default Value:" + column.defaultVal;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getCheckConstraintsInformation(StringBuilder constraintsInfo, CheckConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<CheckConstraintCol>> checkConstraints = constraint.getCheckConstraints();
+    if (MapUtils.isNotEmpty(checkConstraints)) {
+      for (Map.Entry<String, List<CheckConstraintCol>> entry : checkConstraints.entrySet()) {
+        getCheckConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getCheckConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<CheckConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (CheckConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Check Value:" + column.checkExpression;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void addExtendedTableData(DataOutputStream out, Table table, Partition partition) throws IOException {
+    if (partition != null) {
+      out.write(("Detailed Partition Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(partition.getTPartition().toString().getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    } else {
+      out.write(("Detailed Table Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      String tableDesc = HiveStringUtils.escapeJava(table.getTTable().toString());
+      out.write(tableDesc.getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty

Review comment:
       `StandardCharsets` - same as above: prefer `StandardCharsets.UTF_8` over the `"UTF-8"` literal in these `getBytes` calls.
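
       For example, a one-line sketch of the same substitution (assuming it is intended for all the `getBytes("UTF-8")` calls in this method):

       ```java
       out.write("Detailed Partition Information".getBytes(java.nio.charset.StandardCharsets.UTF_8));
       ```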

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);

Review comment:
       Chain the `append` calls instead of concatenating the argument with `+`: `tableInfo.append(LINE_DELIM).append("# ")...append(LINE_DELIM);`
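
       The fully chained form of this line, for example:

           tableInfo.append(LINE_DELIM)
               .append("# ")
               .append(table.isView() ? "" : "Materialized ")
               .append("View Information")
               .append(LINE_DELIM);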

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
+  private class VectorComparator<T extends Comparable<T>>  implements Comparator<List<T>>{
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else {
+          if (valB != null) {
+            return -1;
+          }
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list){
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  private <T> List<T> sortList(List<T> list, Comparator<T> comparator) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret, comparator);
+    return ret;
+  }
+
+  private String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo) {
+    displayAllParameters(params, tableInfo, true, false);
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo, boolean escapeUnicode,
+      boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
+      if (key.equals(StatsSetupConst.NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value) : HiveStringUtils.escapeJava(value),
+          tableInfo, isOutputPadded);
+    }
+  }
+
+  private String getConstraintsInformation(Table table) {
+    StringBuilder constraintsInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    constraintsInfo.append(LINE_DELIM + "# Constraints" + LINE_DELIM);
+    if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Primary Key" + LINE_DELIM);
+      getPrimaryKeyInformation(constraintsInfo, table.getPrimaryKeyInfo());
+    }
+    if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Foreign Keys" + LINE_DELIM);
+      getForeignKeysInformation(constraintsInfo, table.getForeignKeyInfo());
+    }
+    if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Unique Constraints" + LINE_DELIM);
+      getUniqueConstraintsInformation(constraintsInfo, table.getUniqueKeyInfo());
+    }
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Not Null Constraints" + LINE_DELIM);
+      getNotNullConstraintsInformation(constraintsInfo, table.getNotNullConstraint());
+    }
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Default Constraints" + LINE_DELIM);
+      getDefaultConstraintsInformation(constraintsInfo, table.getDefaultConstraint());
+    }
+    if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Check Constraints" + LINE_DELIM);
+      getCheckConstraintsInformation(constraintsInfo, table.getCheckConstraint());
+    }
+    return constraintsInfo.toString();
+  }
+
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();
+    for (String columnName : columnNames.values()) {
+      constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title) + FIELD_DELIM);
+      formatOutput(new String[] {columnName}, constraintsInfo);
+    }
+  }
+
+  private void getForeignKeysInformation(StringBuilder constraintsInfo, ForeignKeyInfo constraint) {
+    formatOutput("Table:", constraint.getChildDatabaseName() + "." + constraint.getChildTableName(), constraintsInfo);
+    Map<String, List<ForeignKeyCol>> foreignKeys = constraint.getForeignKeys();
+    if (MapUtils.isNotEmpty(foreignKeys)) {
+      for (Map.Entry<String, List<ForeignKeyCol>> entry : foreignKeys.entrySet()) {
+        getForeignKeyRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getForeignKeyRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<ForeignKeyCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (ForeignKeyCol column : columns) {
+        String[] fields = new String[3];
+        fields[0] = "Parent Column Name:" +
+            column.parentDatabaseName + "."+ column.parentTableName + "." + column.parentColName;
+        fields[1] = "Column Name:" + column.childColName;
+        fields[2] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getUniqueConstraintsInformation(StringBuilder constraintsInfo, UniqueConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<UniqueConstraintCol>> uniqueConstraints = constraint.getUniqueConstraints();
+    if (MapUtils.isNotEmpty(uniqueConstraints)) {
+      for (Map.Entry<String, List<UniqueConstraintCol>> entry : uniqueConstraints.entrySet()) {
+        getUniqueConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getUniqueConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<UniqueConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (UniqueConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Key Sequence:" + column.position;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getNotNullConstraintsInformation(StringBuilder constraintsInfo, NotNullConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, String> notNullConstraints = constraint.getNotNullConstraints();
+    if (MapUtils.isNotEmpty(notNullConstraints)) {
+      for (Map.Entry<String, String> entry : notNullConstraints.entrySet()) {
+        formatOutput("Constraint Name:", entry.getKey(), constraintsInfo);
+        formatOutput("Column Name:", entry.getValue(), constraintsInfo);
+        constraintsInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  private void getDefaultConstraintsInformation(StringBuilder constraintsInfo, DefaultConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<DefaultConstraintCol>> defaultConstraints = constraint.getDefaultConstraints();
+    if (MapUtils.isNotEmpty(defaultConstraints)) {
+      for (Map.Entry<String, List<DefaultConstraintCol>> entry : defaultConstraints.entrySet()) {
+        getDefaultConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getDefaultConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<DefaultConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (DefaultConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Default Value:" + column.defaultVal;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void getCheckConstraintsInformation(StringBuilder constraintsInfo, CheckConstraint constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    Map<String, List<CheckConstraintCol>> checkConstraints = constraint.getCheckConstraints();
+    if (MapUtils.isNotEmpty(checkConstraints)) {
+      for (Map.Entry<String, List<CheckConstraintCol>> entry : checkConstraints.entrySet()) {
+        getCheckConstraintRelInformation(constraintsInfo, entry.getKey(), entry.getValue());
+      }
+    }
+  }
+
+  private void getCheckConstraintRelInformation(StringBuilder constraintsInfo, String constraintName,
+      List<CheckConstraintCol> columns) {
+    formatOutput("Constraint Name:", constraintName, constraintsInfo);
+    if (CollectionUtils.isNotEmpty(columns)) {
+      for (CheckConstraintCol column : columns) {
+        String[] fields = new String[2];
+        fields[0] = "Column Name:" + column.colName;
+        fields[1] = "Check Value:" + column.checkExpression;
+        formatOutput(fields, constraintsInfo);
+      }
+    }
+    constraintsInfo.append(LINE_DELIM);
+  }
+
+  private void addExtendedTableData(DataOutputStream out, Table table, Partition partition) throws IOException {
+    if (partition != null) {
+      out.write(("Detailed Partition Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(partition.getTPartition().toString().getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    } else {
+      out.write(("Detailed Table Information").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      String tableDesc = HiveStringUtils.escapeJava(table.getTTable().toString());
+      out.write(tableDesc.getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      out.write(Utilities.newLineCode); // comment column is empty
+    }
+  }
+
+  private void addExtendedConstraintData(DataOutputStream out, Table table)
+      throws IOException, UnsupportedEncodingException {
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      out.write(("Constraints").getBytes("UTF-8"));
+      out.write(Utilities.tabCode);
+      if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+        out.write(table.getPrimaryKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+        out.write(table.getForeignKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+        out.write(table.getUniqueKeyInfo().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+        out.write(table.getNotNullConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+        out.write(table.getDefaultConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+      if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+        out.write(table.getCheckConstraint().toString().getBytes("UTF-8"));
+        out.write(Utilities.newLineCode);
+      }
+    }
+  }
+
+  private void addExtendedStorageData(DataOutputStream out, Table table)
+      throws IOException, UnsupportedEncodingException {
+    if (table.getStorageHandlerInfo() != null) {
+      out.write(("StorageHandlerInfo").getBytes("UTF-8"));
+      out.write(Utilities.newLineCode);
+      out.write(table.getStorageHandlerInfo().formatAsText().getBytes("UTF-8"));
+      out.write(Utilities.newLineCode);

Review comment:
       `StandardCharsets.UTF_8` again, for each of the `getBytes("UTF-8")` calls in these extended-data writers (see the sketch above).
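
       Applied to one of the writes above, for instance:

           out.write("StorageHandlerInfo".getBytes(StandardCharsets.UTF_8));
           out.write(Utilities.newLineCode);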

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);

Review comment:
       Chain the appends here as well: `tableInfo.append(LINE_DELIM).append("# Detailed Partition Information").append(LINE_DELIM);`

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);

Review comment:
       Same here: `tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);`

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);

Review comment:
       nit: chain the appends, i.e. `.append("Table Parameters:").append(LINE_DELIM)`, instead of concatenating the strings inside a single `append()` call.
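       A minimal sketch of the chained form (using the `LINE_DELIM` constant statically imported from `ShowUtils` above):

       ```java
       StringBuilder tableInfo = new StringBuilder();
       // chaining avoids building the intermediate "Table Parameters:" + LINE_DELIM string
       tableInfo.append("Table Parameters:").append(LINE_DELIM);
       ```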

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);

Review comment:
       nit: same here, `.append("Storage Desc Params:").append(LINE_DELIM)` instead of concatenating inside `append()`.
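       Sketched, as above:

       ```java
       tableInfo.append("Storage Desc Params:").append(LINE_DELIM);
       ```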

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);

Review comment:
       nit: same as above, `.append("Partition Parameters:").append(LINE_DELIM)` instead of concatenating inside `append()`.
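       Sketched:

       ```java
       tableInfo.append("Partition Parameters:").append(LINE_DELIM);
       ```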

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/workloadmanagement/resourceplan/show/formatter/TextShowResourcePlanFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.workloadmanagement.resourceplan.show.formatter;
+
+import org.apache.hadoop.hive.metastore.api.WMFullResourcePlan;
+import org.apache.hadoop.hive.metastore.api.WMResourcePlan;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Formats SHOW RESOURCE PLAN(S) results to text format.
+ */
+class TextShowResourcePlanFormatter extends ShowResourcePlanFormatter {
+  @Override
+  public void showResourcePlans(DataOutputStream out, List<WMResourcePlan> resourcePlans) throws HiveException {
+    try {
+      for (WMResourcePlan plan : resourcePlans) {
+        out.write(plan.getName().getBytes(ShowUtils.UTF_8));
+        out.write(Utilities.tabCode);
+        out.write(plan.getStatus().name().getBytes(ShowUtils.UTF_8));
+        out.write(Utilities.tabCode);
+        String queryParallelism = plan.isSetQueryParallelism() ? Integer.toString(plan.getQueryParallelism()) : "null";
+        out.write(queryParallelism.getBytes(ShowUtils.UTF_8));
+        out.write(Utilities.tabCode);
+        String defaultPoolPath = plan.isSetDefaultPoolPath() ? plan.getDefaultPoolPath() : "null";
+        out.write(defaultPoolPath.getBytes(ShowUtils.UTF_8));

Review comment:
       `StandardCharsets.UTF_8` could be used here instead of the `ShowUtils.UTF_8` constant (and in the other `getBytes` calls in this method).
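       A sketch of the suggested replacement (writer and getter names taken from the diff above):

       ```java
       import java.nio.charset.StandardCharsets;

       // the Charset overload needs no charset-name lookup and throws no UnsupportedEncodingException
       out.write(plan.getName().getBytes(StandardCharsets.UTF_8));
       ```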

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // in case all files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // no matter loc is the table location or part location, it must be a
+          // directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {
+      fileData.lastUpdateTime = status.getModificationTime();
+    }
+
+    FileStatus[] entryStatuses = fileSystem.listStatus(status.getPath());
+    for (FileStatus entryStatus : entryStatuses) {
+      if (entryStatus.isDirectory()) {
+        processDir(entryStatus, fileSystem, fileData);
+        continue;
+      }
+
+      fileData.numOfFiles++;
+      if (entryStatus.isErasureCoded()) {
+        fileData.numOfErasureCodedFiles++;
+      }
+
+      long fileLength = entryStatus.getLen();
+      fileData.totalFileSize += fileLength;
+      if (fileLength > fileData.maxFileSize) {
+        fileData.maxFileSize = fileLength;
+      }
+      if (fileLength < fileData.minFileSize) {
+        fileData.minFileSize = fileLength;
+      }
+
+      if (entryStatus.getAccessTime() > fileData.lastAccessTime) {

Review comment:
       `Math.max` would express these guarded updates more directly.
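       For illustration, the two guarded assignments collapse to (field names from the diff):

       ```java
       fileData.lastAccessTime = Math.max(fileData.lastAccessTime, entryStatus.getAccessTime());
       fileData.lastUpdateTime = Math.max(fileData.lastUpdateTime, entryStatus.getModificationTime());
       ```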

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/show/ShowPartitionsFormatter.java
##########
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.partition.show;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.hive.common.FileUtils;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats SHOW PARTITIONS results.
+ */
+abstract class ShowPartitionsFormatter {
+  static ShowPartitionsFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowPartitionsFormatter();
+    } else {
+      return new TextShowPartitionsFormatter();
+    }
+  }
+
+  abstract void showTablePartitions(DataOutputStream out, List<String> partitions) throws HiveException;
+
+  // ------ Implementations ------
+
+  static class JsonShowPartitionsFormatter extends ShowPartitionsFormatter {
+    @Override
+    void showTablePartitions(DataOutputStream out, List<String> partitions) throws HiveException {
+      List<Map<String, Object>> partitionData = new ArrayList<>(partitions.size());
+      for (String partition : partitions) {
+        partitionData.add(makeOneTablePartition(partition));
+      }
+      ShowUtils.asJson(out, MapBuilder.create().put("partitions", partitionData).build());
+    }
+
+    // TODO: This seems like a very wrong implementation.
+    private Map<String, Object> makeOneTablePartition(String partition) {
+      List<Map<String, Object>> result = new ArrayList<>();
+
+      List<String> names = new ArrayList<String>();
+      for (String part : StringUtils.split(partition, "/")) {

Review comment:
       the JDK `String.split` is enough here, no need for the commons-lang3 `StringUtils.split`.
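       A minimal sketch, assuming partition specs of the usual `key=value/key=value` shape with no leading separator (the one case where the two split flavors would differ):

       ```java
       // "/" contains no regex metacharacters, so the JDK split is a drop-in replacement here
       for (String part : "year=2020/month=12".split("/")) {
         System.out.println(part);  // prints "year=2020", then "month=12"
       }
       ```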

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // in case all files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // no matter loc is the table location or part location, it must be a
+          // directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {
+      fileData.lastUpdateTime = status.getModificationTime();
+    }
+
+    FileStatus[] entryStatuses = fileSystem.listStatus(status.getPath());
+    for (FileStatus entryStatus : entryStatuses) {
+      if (entryStatus.isDirectory()) {
+        processDir(entryStatus, fileSystem, fileData);
+        continue;
+      }
+
+      fileData.numOfFiles++;
+      if (entryStatus.isErasureCoded()) {
+        fileData.numOfErasureCodedFiles++;
+      }
+
+      long fileLength = entryStatus.getLen();
+      fileData.totalFileSize += fileLength;
+      if (fileLength > fileData.maxFileSize) {
+        fileData.maxFileSize = fileLength;
+      }
+      if (fileLength < fileData.minFileSize) {
+        fileData.minFileSize = fileLength;
+      }
+
+      if (entryStatus.getAccessTime() > fileData.lastAccessTime) {
+        fileData.lastAccessTime = entryStatus.getAccessTime();
+      }
+      if (entryStatus.getModificationTime() > fileData.lastUpdateTime) {

Review comment:
       `Math.max` here as well, for both timestamps.
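       Sketched:

       ```java
       fileData.lastUpdateTime = Math.max(fileData.lastUpdateTime, entryStatus.getModificationTime());
       ```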

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // Walk through the existing map and truncate each path so that tests won't mask it; then we can verify that the location is right.
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
+  private class VectorComparator<T extends Comparable<T>>  implements Comparator<List<T>>{
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else {
+          if (valB != null) {
+            return -1;
+          }
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list){
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  private <T> List<T> sortList(List<T> list, Comparator<T> comparator) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret, comparator);
+    return ret;
+  }
+
+  private String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo) {
+    displayAllParameters(params, tableInfo, true, false);
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo, boolean escapeUnicode,
+      boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
+      if (key.equals(StatsSetupConst.NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value) : HiveStringUtils.escapeJava(value),
+          tableInfo, isOutputPadded);
+    }
+  }
+
+  private String getConstraintsInformation(Table table) {
+    StringBuilder constraintsInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    constraintsInfo.append(LINE_DELIM + "# Constraints" + LINE_DELIM);
+    if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Primary Key" + LINE_DELIM);
+      getPrimaryKeyInformation(constraintsInfo, table.getPrimaryKeyInfo());
+    }
+    if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Foreign Keys" + LINE_DELIM);
+      getForeignKeysInformation(constraintsInfo, table.getForeignKeyInfo());
+    }
+    if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Unique Constraints" + LINE_DELIM);
+      getUniqueConstraintsInformation(constraintsInfo, table.getUniqueKeyInfo());
+    }
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Not Null Constraints" + LINE_DELIM);
+      getNotNullConstraintsInformation(constraintsInfo, table.getNotNullConstraint());
+    }
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Default Constraints" + LINE_DELIM);
+      getDefaultConstraintsInformation(constraintsInfo, table.getDefaultConstraint());
+    }
+    if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Check Constraints" + LINE_DELIM);
+      getCheckConstraintsInformation(constraintsInfo, table.getCheckConstraint());
+    }
+    return constraintsInfo.toString();
+  }
+
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();
+    for (String columnName : columnNames.values()) {
+      constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title) + FIELD_DELIM);

Review comment:
       `.append().append()`: chaining the two appends avoids building the concatenated string first.
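
       A possible chained form of the append above, with the same identifiers (sketch only):

   ```
   // Append the padded title and the field delimiter directly, skipping the intermediate String concatenation.
   constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title)).append(FIELD_DELIM);
   ```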

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/tables/ShowTablesFormatter.java
##########
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.tables;
+
+import com.google.common.collect.ImmutableMap;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats SHOW TABLES results.
+ */
+public abstract class ShowTablesFormatter {
+  public static ShowTablesFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTablesFormatter();
+    } else {
+      return new TextShowTablesFormatter();
+    }
+  }
+
+  public abstract void showTables(DataOutputStream out, List<String> tables) throws HiveException;
+
+  abstract void showTablesExtended(DataOutputStream out, List<Table> tables) throws HiveException;
+
+  // ------ Implementations ------
+
+  static class JsonShowTablesFormatter extends ShowTablesFormatter {
+    @Override
+    public void showTables(DataOutputStream out, List<String> tables) throws HiveException {
+      ShowUtils.asJson(out, MapBuilder.create().put("tables", tables).build());
+    }
+
+    @Override
+    void showTablesExtended(DataOutputStream out, List<Table> tables) throws HiveException {
+      if (tables.isEmpty()) {
+        return;
+      }
+
+      List<Map<String, Object>> tableDataList = new ArrayList<>();
+      for (Table table : tables) {
+        Map<String, Object> tableData = ImmutableMap.of(
+            "Table Name", table.getTableName(),
+            "Table Type", table.getTableType().toString());
+        tableDataList.add(tableData);
+      }
+
+      ShowUtils.asJson(out, ImmutableMap.of("tables", tableDataList));
+    }
+  }
+
+  static class TextShowTablesFormatter extends ShowTablesFormatter {
+    @Override
+    public void showTables(DataOutputStream out, List<String> tables) throws HiveException {
+      Iterator<String> iterTbls = tables.iterator();
+
+      try {
+        while (iterTbls.hasNext()) {
+          // create a row per table name
+          out.write(iterTbls.next().getBytes("UTF-8"));

Review comment:
       `StandardCharsets.UTF_8` should be used here instead of the charset name string.
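
       Sketch of the suggested change, assuming `java.nio.charset.StandardCharsets` is imported:

   ```
   // getBytes(Charset) also avoids the checked UnsupportedEncodingException of getBytes(String).
   out.write(iterTbls.next().getBytes(StandardCharsets.UTF_8));
   ```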

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();

Review comment:
       `intern` of a constant value is probably useless. Please remove.
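
       That is, simply:

   ```
   String title = "Column Name:";  // string literals are interned by the JVM already
   ```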

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list){

Review comment:
       Should be able to replace these methods easily with Lambdas...
   
   ```list.stream().sorted(...).collect(Collectors.toList());```
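
   Spelled out a little more, with hypothetical variable names taken from the call sites in the hunk (the originals also short-circuit on null input, kept here as a ternary):

   ```
   // Assumes java.util.stream.Collectors is imported.
   List<String> sortedNames = skewedColNames == null
       ? null
       : skewedColNames.stream().sorted().collect(Collectors.toList());

   List<List<String>> sortedValues = skewedColValues == null
       ? null
       : skewedColValues.stream().sorted(new VectorComparator<String>()).collect(Collectors.toList());
   ```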




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: gitbox-unsubscribe@hive.apache.org
For additional commands, e-mail: gitbox-help@hive.apache.org


[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550805459



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {

Review comment:
       Fixed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924522



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/JsonDescTableFormatter.java
##########
@@ -0,0 +1,265 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+
+import java.io.DataOutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats DESC TABLE results to json format.
+ */
+public class JsonDescTableFormatter extends DescTableFormatter {
+  private static final String COLUMN_NAME = "name";
+  private static final String COLUMN_TYPE = "type";
+  private static final String COLUMN_COMMENT = "comment";
+  private static final String COLUMN_MIN = "min";
+  private static final String COLUMN_MAX = "max";
+  private static final String COLUMN_NUM_NULLS = "numNulls";
+  private static final String COLUMN_NUM_TRUES = "numTrues";
+  private static final String COLUMN_NUM_FALSES = "numFalses";
+  private static final String COLUMN_DISTINCT_COUNT = "distinctCount";
+  private static final String COLUMN_AVG_LENGTH = "avgColLen";
+  private static final String COLUMN_MAX_LENGTH = "maxColLen";
+
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    MapBuilder builder = MapBuilder.create();
+    builder.put("columns", createColumnsInfo(columns, columnStats));
+
+    if (isExtended) {
+      addExtendedInfo(table, partition, builder);
+    }
+
+    ShowUtils.asJson(out, builder.build());
+  }
+
+  public static List<Map<String, Object>> createColumnsInfo(List<FieldSchema> columns,
+      List<ColumnStatisticsObj> columnStatisticsList) {
+    List<Map<String, Object>> columnsInfo = new ArrayList<>();

Review comment:
       Fixed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550925420



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // Walk through the existing map, truncating each path so tests won't mask it and the location can be verified.
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
+  private class VectorComparator<T extends Comparable<T>> implements Comparator<List<T>> {
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else {
+          if (valB != null) {
+            return -1;
+          }
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  private <T> List<T> sortList(List<T> list, Comparator<T> comparator) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    List<T> ret = new ArrayList<>(list);
+    Collections.sort(ret, comparator);
+    return ret;
+  }
+
+  private String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo) {
+    displayAllParameters(params, tableInfo, true, false);
+  }
+
+  private void displayAllParameters(Map<String, String> params, StringBuilder tableInfo, boolean escapeUnicode,
+      boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
+      if (key.equals(StatsSetupConst.NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value) : HiveStringUtils.escapeJava(value),
+          tableInfo, isOutputPadded);
+    }
+  }
+
+  private String getConstraintsInformation(Table table) {
+    StringBuilder constraintsInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    constraintsInfo.append(LINE_DELIM + "# Constraints" + LINE_DELIM);
+    if (PrimaryKeyInfo.isPrimaryKeyInfoNotEmpty(table.getPrimaryKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Primary Key" + LINE_DELIM);
+      getPrimaryKeyInformation(constraintsInfo, table.getPrimaryKeyInfo());
+    }
+    if (ForeignKeyInfo.isForeignKeyInfoNotEmpty(table.getForeignKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Foreign Keys" + LINE_DELIM);
+      getForeignKeysInformation(constraintsInfo, table.getForeignKeyInfo());
+    }
+    if (UniqueConstraint.isUniqueConstraintNotEmpty(table.getUniqueKeyInfo())) {
+      constraintsInfo.append(LINE_DELIM + "# Unique Constraints" + LINE_DELIM);
+      getUniqueConstraintsInformation(constraintsInfo, table.getUniqueKeyInfo());
+    }
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Not Null Constraints" + LINE_DELIM);
+      getNotNullConstraintsInformation(constraintsInfo, table.getNotNullConstraint());
+    }
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Default Constraints" + LINE_DELIM);
+      getDefaultConstraintsInformation(constraintsInfo, table.getDefaultConstraint());
+    }
+    if (CheckConstraint.isCheckConstraintNotEmpty(table.getCheckConstraint())) {
+      constraintsInfo.append(LINE_DELIM + "# Check Constraints" + LINE_DELIM);
+      getCheckConstraintsInformation(constraintsInfo, table.getCheckConstraint());
+    }
+    return constraintsInfo.toString();
+  }
+
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();

Review comment:
       Removed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550929405



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);

Review comment:
       Fixed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550929728



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+  private void getPrimaryKeyInformation(StringBuilder constraintsInfo, PrimaryKeyInfo constraint) {
+    formatOutput("Table:", constraint.getDatabaseName() + "." + constraint.getTableName(), constraintsInfo);
+    formatOutput("Constraint Name:", constraint.getConstraintName(), constraintsInfo);
+    Map<Integer, String> columnNames = constraint.getColNames();
+    String title = "Column Name:".intern();
+    for (String columnName : columnNames.values()) {
+      constraintsInfo.append(String.format("%-" + ALIGNMENT + "s", title) + FIELD_DELIM);

Review comment:
       Fixed.
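
       For reference, a minimal sketch of the left-justified, fixed-width label
       formatting that the quoted line builds. The ALIGNMENT and FIELD_DELIM
       values below are illustrative stand-ins, not the real constants:

           public class FormatSketch {
             private static final int ALIGNMENT = 20;        // assumed column width
             private static final String FIELD_DELIM = "\t"; // assumed delimiter

             public static void main(String[] args) {
               // "%-20s" left-justifies the title and pads it to 20 characters,
               // so every value column starts at the same offset.
               String row = String.format("%-" + ALIGNMENT + "s", "Column Name:")
                   + FIELD_DELIM + "id";
               System.out.println(row);
             }
           }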






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550925559



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/process/show/compactions/ShowCompactionsDesc.java
##########
@@ -32,7 +32,8 @@
   private static final long serialVersionUID = 1L;
 
   public static final String SCHEMA =
-      "compactionid,dbname,tabname,partname,type,state,hostname,workerid,enqueuetime,starttime,duration,hadoopjobid,errormessage#" +
+      "compactionid,dbname,tabname,partname,type,state,hostname,workerid,enqueuetime,starttime,duration,hadoopjobid," +
+      "errormessage#" +

Review comment:
       There is a 120-character line-length limit in the Hive checkstyle, and I'm trying to make all of the DDL code free of checkstyle violations. This patch is about making the SHOW-type commands cleaner, which is why the change is here.
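
       For anyone wondering why the split is safe: adjacent string literals
       joined with '+' are folded by javac at compile time, so the constant's
       runtime value is unchanged. A sketch of the idiom (the type list below
       is illustrative, not the real schema):

           public final class SchemaSplitSketch {
             // One literal longer than 120 characters violates the limit; the
             // split version is the same compile-time constant.
             public static final String SCHEMA =
                 "compactionid,dbname,tabname,partname,type,state,hostname,workerid," +
                 "enqueuetime,starttime,duration,hadoopjobid,errormessage#" +
                 "string:string:string:string:string:string:string:" +
                 "string:string:string:string:string:string";

             private SchemaSplitSketch() {
             }
           }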






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r552905387



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/lock/show/ShowDbLocksAnalyzer.java
##########
@@ -23,9 +23,6 @@
 import org.apache.hadoop.hive.ql.ddl.DDLWork;
 import org.apache.hadoop.hive.ql.exec.Task;
 import org.apache.hadoop.hive.ql.exec.TaskFactory;
-import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;
-import org.apache.hadoop.hive.ql.lockmgr.LockException;
-import org.apache.hadoop.hive.ql.lockmgr.TxnManagerFactory;

Review comment:
       Removed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/add/AlterTableAddPartitionDesc.java
##########
@@ -234,7 +233,7 @@ public void setWriteId(long writeId) {
 
   @Override
   public String getFullTableName() {
-    return AcidUtils.getFullTableName(dbName,tableName);
+    return AcidUtils.getFullTableName(dbName, tableName);

Review comment:
       Removed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/skewed/AlterTableSetSkewedLocationAnalyzer.java
##########
@@ -20,7 +20,6 @@
 
 import java.net.URI;
 import java.net.URISyntaxException;
-import java.util.ArrayList;

Review comment:
       Removed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AlterViewAsAnalyzer.java
##########
@@ -84,7 +84,7 @@ private void validateCreateView(AlterViewAsDesc desc, SemanticAnalyzer analyzer)
 
     if (oldView == null) {
       String viewNotExistErrorMsg = "The following view does not exist: " + desc.getViewName();
-      throw new SemanticException( ErrorMsg.ALTER_VIEW_AS_SELECT_NOT_EXIST.getMsg(viewNotExistErrorMsg));
+      throw new SemanticException(ErrorMsg.ALTER_VIEW_AS_SELECT_NOT_EXIST.getMsg(viewNotExistErrorMsg));

Review comment:
       Removed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/MapBuilder.java
##########
@@ -17,51 +17,53 @@
  */
 package org.apache.hadoop.hive.ql.metadata.formatting;
 
-import java.util.HashMap;
 import java.util.LinkedHashMap;
 import java.util.Map;
 
 /**
  * Helper class to build Maps consumed by the JSON formatter.  Only
  * add non-null entries to the Map.
  */
-public class MapBuilder {
-    private Map<String, Object> map = new LinkedHashMap<String, Object>();
+public final class MapBuilder {
+  private Map<String, Object> map = new LinkedHashMap<String, Object>();
 
-    private MapBuilder() {}
+  private MapBuilder() {
+  }
 
-    public static MapBuilder create() {
-        return new MapBuilder();
-    }
+  public static MapBuilder create() {
+    return new MapBuilder();
+  }
 
-    public MapBuilder put(String name, Object val) {
-        if (val != null)
-            map.put(name, val);
-        return this;
+  public MapBuilder put(String name, Object val) {
+    if (val != null) {
+      map.put(name, val);
     }
+    return this;
+  }
 
-    public MapBuilder put(String name, boolean val) {
-        map.put(name, Boolean.valueOf(val));
-        return this;
-    }
+  public MapBuilder put(String name, boolean val) {
+    map.put(name, Boolean.valueOf(val));
+    return this;
+  }
 
-    public MapBuilder put(String name, int val) {
-        map.put(name, Integer.valueOf(val));
-        return this;
-    }
+  public MapBuilder put(String name, int val) {
+    map.put(name, Integer.valueOf(val));
+    return this;
+  }
 
-    public MapBuilder put(String name, long val) {
-        map.put(name, Long.valueOf(val));
-        return this;
-    }
+  public MapBuilder put(String name, long val) {
+    map.put(name, Long.valueOf(val));
+    return this;
+  }
 
-    public <T> MapBuilder put(String name, T val, boolean use) {
-        if (use)
-            put(name, val);
-        return this;
+  public <T> MapBuilder put(String name, T val, boolean use) {
+    if (use) {
+      put(name, val);
     }
+    return this;
+  }
 
-    public Map<String, Object> build() {
-        return map;
-    }
+  public Map<String, Object> build() {
+    return map;
+  }

Review comment:
       Removed.
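
       For completeness, a quick usage sketch of the MapBuilder API quoted
       above (the keys and values are made up), showing its two filtering
       behaviours: null values are silently dropped, and the three-argument
       put only adds the entry when the flag is true:

           import java.util.Map;

           public class MapBuilderDemo {
             public static void main(String[] args) {
               // Assumes the MapBuilder class from the diff above is on the classpath.
               Map<String, Object> m = MapBuilder.create()
                   .put("name", "t1")
                   .put("owner", null)          // skipped: null values are never added
                   .put("partitioned", true)
                   .put("retention", 7, false)  // skipped: the 'use' flag is false
                   .build();
               System.out.println(m);           // prints {name=t1, partitioned=true}
             }
           }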






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550923972



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));

Review comment:
       Fixed.
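
       Presumably the fix is the conventional zero-length array form of the
       toArray idiom (I'm inferring, the updated hunk isn't quoted here):

           import java.util.Arrays;
           import java.util.List;

           public class ToArraySketch {
             public static void main(String[] args) {
               List<String> headers = Arrays.asList("col_name", "data_type", "min", "max");
               // new String[0] is the usual spelling of the type witness; the list
               // allocates a correctly sized array either way.
               String[] row = headers.toArray(new String[0]);
               System.out.println(Arrays.toString(row));
             }
           }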






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550804352



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/show/ShowPartitionsFormatter.java
##########
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.partition.show;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.hive.common.FileUtils;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats SHOW PARTITIONS results.
+ */
+abstract class ShowPartitionsFormatter {
+  static ShowPartitionsFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowPartitionsFormatter();
+    } else {
+      return new TextShowPartitionsFormatter();
+    }
+  }
+
+  abstract void showTablePartitions(DataOutputStream out, List<String> partitions) throws HiveException;
+
+  // ------ Implementations ------
+
+  static class JsonShowPartitionsFormatter extends ShowPartitionsFormatter {
+    @Override
+    void showTablePartitions(DataOutputStream out, List<String> partitions) throws HiveException {
+      List<Map<String, Object>> partitionData = new ArrayList<>(partitions.size());
+      for (String partition : partitions) {
+        partitionData.add(makeOneTablePartition(partition));
+      }
+      ShowUtils.asJson(out, MapBuilder.create().put("partitions", partitionData).build());
+    }
+
+    // TODO: This seems like a very wrong implementation.
+    private Map<String, Object> makeOneTablePartition(String partition) {
+      List<Map<String, Object>> result = new ArrayList<>();
+
+      List<String> names = new ArrayList<String>();
+      for (String part : StringUtils.split(partition, "/")) {

Review comment:
       Done
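
       For context, a standalone sketch of what that loop does: split a spec
       such as "year=2020/month=12" into name/value pairs and URL-decode each
       value. This is only an illustration of the behaviour, not the committed
       code, and it assumes every component contains an '=':

           import java.net.URLDecoder;
           import java.nio.charset.StandardCharsets;
           import java.util.LinkedHashMap;
           import java.util.Map;

           public class PartitionSpecSketch {
             public static Map<String, String> parse(String partition) throws Exception {
               Map<String, String> spec = new LinkedHashMap<>();
               for (String part : partition.split("/")) {
                 int eq = part.indexOf('=');                    // assumed present
                 String name = part.substring(0, eq);
                 String value = URLDecoder.decode(part.substring(eq + 1),
                     StandardCharsets.UTF_8.name());
                 spec.put(name, value);
               }
               return spec;
             }

             public static void main(String[] args) throws Exception {
               System.out.println(parse("year=2020/month=12")); // {year=2020, month=12}
             }
           }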






[GitHub] [hive] miklosgergely commented on pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on pull request #1756:
URL: https://github.com/apache/hive/pull/1756#issuecomment-753382628


   Thank you, @belugabehr, for your comments; please check out the modified PR.




[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924209



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // in case all files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // no matter loc is the table location or part location, it must be a
+          // directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();

Review comment:
       Fixed.
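
       Assuming the fix folds the compare-and-assign pairs into Math.max
       (I'm inferring the shape, the updated hunk isn't quoted here), the
       timestamp aggregation reduces to a one-line fold per field:

           public class StatsFoldSketch {
             // Keep the larger of the running value and the new observation.
             static long fold(long current, long candidate) {
               return Math.max(current, candidate);
             }

             public static void main(String[] args) {
               long lastAccessTime = 0;
               lastAccessTime = fold(lastAccessTime, 1_600_000_000_000L);
               lastAccessTime = fold(lastAccessTime, 1_500_000_000_000L); // older, ignored
               System.out.println(lastAccessTime); // 1600000000000
             }
           }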






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550925339



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes("UTF-8"));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      String statsState = table.getParameters().get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (table.getParameters() != null && statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes("UTF-8"));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException, UnsupportedEncodingException {
+    String formattedTableInfo = null;
+    if (partition != null) {
+      formattedTableInfo = getPartitionInformation(partition);
+    } else {
+      formattedTableInfo = getTableInformation(table, isOutputPadded);
+    }
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes("UTF-8"));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo);
+    }
+  }
+
+  private void getTableMetaDataInformation(StringBuilder tableInfo, Table table, boolean isOutputPadded) {
+    formatOutput("Database:", table.getDbName(), tableInfo);
+    formatOutput("OwnerType:", (table.getOwnerType() != null) ? table.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", table.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(table.getTTable().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(table.getTTable().getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(table.getRetention()), tableInfo);
+    
+    if (!table.isView()) {
+      formatOutput("Location:", table.getDataLocation().toString(), tableInfo);
+    }
+    formatOutput("Table Type:", table.getTableType().name(), tableInfo);
+
+    if (table.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:" + LINE_DELIM);
+      displayAllParameters(table.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  private void getPartitionMetaDataInformation(StringBuilder tableInfo, Partition partition) {
+    formatOutput("Partition Value:", partition.getValues().toString(), tableInfo);
+    formatOutput("Database:", partition.getTPartition().getDbName(), tableInfo);
+    formatOutput("Table:", partition.getTable().getTableName(), tableInfo);
+    formatOutput("CreateTime:", formatDate(partition.getTPartition().getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(partition.getTPartition().getLastAccessTime()), tableInfo);
+    formatOutput("Location:", partition.getLocation(), tableInfo);
+
+    if (partition.getTPartition().getParameters().size() > 0) {
+      tableInfo.append("Partition Parameters:" + LINE_DELIM);
+      displayAllParameters(partition.getTPartition().getParameters(), tableInfo);
+    }
+  }
+
+  private class VectorComparator<T extends Comparable<T>>  implements Comparator<List<T>>{
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else {
+          if (valB != null) {
+            return -1;
+          }
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+
+  private <T extends Comparable<T>> List<T> sortList(List<T> list){

Review comment:
       Fixed, nice catch!
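
       One subtlety worth noting on the comparator above: when the left element
       is non-null and the right one is null, valA.compareTo(valB) can throw a
       NullPointerException. A null-safe variant can lean on the JDK's
       nullsFirst wrapper; this is only a sketch, not the committed fix:

           import java.util.ArrayList;
           import java.util.Arrays;
           import java.util.Comparator;
           import java.util.List;

           public class ListComparatorSketch {
             // Element-wise list comparator that orders nulls before non-nulls
             // without ever calling compareTo(null); ties fall back to length.
             static <T extends Comparable<T>> Comparator<List<T>> elementWise() {
               Comparator<T> elem = Comparator.nullsFirst(Comparator.<T>naturalOrder());
               return (a, b) -> {
                 for (int i = 0; i < Math.min(a.size(), b.size()); i++) {
                   int c = elem.compare(a.get(i), b.get(i));
                   if (c != 0) {
                     return c;
                   }
                 }
                 return Integer.compare(a.size(), b.size());
               };
             }

             public static void main(String[] args) {
               List<List<String>> rows = new ArrayList<>(Arrays.asList(
                   Arrays.asList("b"), Arrays.asList((String) null), Arrays.asList("a")));
               rows.sort(elementWise());
               System.out.println(rows); // [[null], [a], [b]]
             }
           }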






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924123



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // in case all files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // no matter loc is the table location or part location, it must be a
+          // directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {
+      fileData.lastUpdateTime = status.getModificationTime();
+    }
+
+    FileStatus[] entryStatuses = fileSystem.listStatus(status.getPath());
+    for (FileStatus entryStatus : entryStatuses) {
+      if (entryStatus.isDirectory()) {
+        processDir(entryStatus, fileSystem, fileData);
+        continue;
+      }
+
+      fileData.numOfFiles++;
+      if (entryStatus.isErasureCoded()) {
+        fileData.numOfErasureCodedFiles++;
+      }
+
+      long fileLength = entryStatus.getLen();
+      fileData.totalFileSize += fileLength;
+      if (fileLength > fileData.maxFileSize) {

Review comment:
       Fixed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // in case all files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // no matter loc is the table location or part location, it must be a
+          // directory.
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {
+      fileData.lastUpdateTime = status.getModificationTime();
+    }
+
+    FileStatus[] entryStatuses = fileSystem.listStatus(status.getPath());
+    for (FileStatus entryStatus : entryStatuses) {
+      if (entryStatus.isDirectory()) {
+        processDir(entryStatus, fileSystem, fileData);
+        continue;
+      }
+
+      fileData.numOfFiles++;
+      if (entryStatus.isErasureCoded()) {
+        fileData.numOfErasureCodedFiles++;
+      }
+
+      long fileLength = entryStatus.getLen();
+      fileData.totalFileSize += fileLength;
+      if (fileLength > fileData.maxFileSize) {
+        fileData.maxFileSize = fileLength;
+      }
+      if (fileLength < fileData.minFileSize) {

Review comment:
       Fixed.
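
       A related sketch for the size bounds, assuming FileData starts
       minFileSize at a Long.MAX_VALUE sentinel so the first observed file
       always replaces it (I haven't re-checked the field initializers):

           public class MinMaxSketch {
             public static void main(String[] args) {
               long minFileSize = Long.MAX_VALUE; // sentinel: first file always wins
               long maxFileSize = 0;
               for (long fileLength : new long[] {42, 7, 1024}) {
                 minFileSize = Math.min(minFileSize, fileLength);
                 maxFileSize = Math.max(maxFileSize, fileLength);
               }
               System.out.println(minFileSize + ".." + maxFileSize); // 7..1024
             }
           }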






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550804026



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {

Review comment:
       Fixed, thanks.
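
The final version is not quoted here; a sketch, assuming the fix swapped the
charset string for StandardCharsets.UTF_8 (the explicit flush() before close is
also redundant inside try-with-resources):

    import java.nio.charset.StandardCharsets;

    try (FSDataOutputStream out = fs.create(resFile);
         OutputStreamWriter writer = new OutputStreamWriter(out, StandardCharsets.UTF_8)) {
      writer.write(data);
      writer.write((char) Utilities.newLineCode);
    }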






[GitHub] [hive] belugabehr commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
belugabehr commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r552889107



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/drop/AbstractDropPartitionAnalyzer.java
##########
@@ -20,25 +20,17 @@
 
 import java.util.ArrayList;
 import java.util.Collection;
-import java.util.HashMap;

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java
##########
@@ -68,7 +68,8 @@ public void analyzeInternal(ASTNode root) throws SemanticException {
     }
 
     Table table = getTable(tableName);
-    Map<Integer, List<ExprNodeGenericFuncDesc>> partitionSpecs = ParseUtils.getFullPartitionSpecs(root, table, conf, false);
+    Map<Integer, List<ExprNodeGenericFuncDesc>> partitionSpecs = ParseUtils.getFullPartitionSpecs(root, table, conf,
+        false);

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableOperation.java
##########
@@ -35,7 +35,6 @@
 import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
 import org.apache.hadoop.hive.ql.ddl.DDLUtils;
 import org.apache.hadoop.hive.ql.ddl.table.constraint.add.AlterTableAddConstraintOperation;
-import org.apache.hadoop.hive.ql.exec.repl.util.ReplUtils;

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/lock/show/ShowLocksAnalyzer.java
##########
@@ -26,9 +26,6 @@
 import org.apache.hadoop.hive.ql.ddl.DDLWork;
 import org.apache.hadoop.hive.ql.exec.Task;
 import org.apache.hadoop.hive.ql.exec.TaskFactory;
-import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;
-import org.apache.hadoop.hive.ql.lockmgr.LockException;
-import org.apache.hadoop.hive.ql.lockmgr.TxnManagerFactory;

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/MapBuilder.java
##########
@@ -17,51 +17,53 @@
  */
 package org.apache.hadoop.hive.ql.metadata.formatting;
 
-import java.util.HashMap;
 import java.util.LinkedHashMap;
 import java.util.Map;
 
 /**
  * Helper class to build Maps consumed by the JSON formatter.  Only
  * add non-null entries to the Map.
  */
-public class MapBuilder {
-    private Map<String, Object> map = new LinkedHashMap<String, Object>();
+public final class MapBuilder {
+  private Map<String, Object> map = new LinkedHashMap<String, Object>();
 
-    private MapBuilder() {}
+  private MapBuilder() {
+  }
 
-    public static MapBuilder create() {
-        return new MapBuilder();
-    }
+  public static MapBuilder create() {
+    return new MapBuilder();
+  }
 
-    public MapBuilder put(String name, Object val) {
-        if (val != null)
-            map.put(name, val);
-        return this;
+  public MapBuilder put(String name, Object val) {
+    if (val != null) {
+      map.put(name, val);
     }
+    return this;
+  }
 
-    public MapBuilder put(String name, boolean val) {
-        map.put(name, Boolean.valueOf(val));
-        return this;
-    }
+  public MapBuilder put(String name, boolean val) {
+    map.put(name, Boolean.valueOf(val));
+    return this;
+  }
 
-    public MapBuilder put(String name, int val) {
-        map.put(name, Integer.valueOf(val));
-        return this;
-    }
+  public MapBuilder put(String name, int val) {
+    map.put(name, Integer.valueOf(val));
+    return this;
+  }
 
-    public MapBuilder put(String name, long val) {
-        map.put(name, Long.valueOf(val));
-        return this;
-    }
+  public MapBuilder put(String name, long val) {
+    map.put(name, Long.valueOf(val));
+    return this;
+  }
 
-    public <T> MapBuilder put(String name, T val, boolean use) {
-        if (use)
-            put(name, val);
-        return this;
+  public <T> MapBuilder put(String name, T val, boolean use) {
+    if (use) {
+      put(name, val);
     }
+    return this;
+  }
 
-    public Map<String, Object> build() {
-        return map;
-    }
+  public Map<String, Object> build() {
+    return map;
+  }

Review comment:
       All changes (there are several) in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/add/AlterTableAddPartitionDesc.java
##########
@@ -234,7 +233,7 @@ public void setWriteId(long writeId) {
 
   @Override
   public String getFullTableName() {
-    return AcidUtils.getFullTableName(dbName,tableName);
+    return AcidUtils.getFullTableName(dbName, tableName);

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/process/show/compactions/ShowCompactionsDesc.java
##########
@@ -31,9 +31,11 @@
 public class ShowCompactionsDesc implements DDLDesc, Serializable {
   private static final long serialVersionUID = 1L;
 
+  // @formatter:off
   public static final String SCHEMA =
       "compactionid,dbname,tabname,partname,type,state,hostname,workerid,enqueuetime,starttime,duration,hadoopjobid,errormessage#" +
       "string:string:string:string:string:string:string:string:string:string:string:string:string";
+  // @formatter:on

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.
   
   I think this is probably an artifact of your IDE.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/rename/AlterTableRenamePartitionOperation.java
##########
@@ -26,7 +26,6 @@
 import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
 import org.apache.hadoop.hive.ql.ddl.DDLUtils;
 import org.apache.hadoop.hive.ql.ddl.table.AlterTableUtils;
-import org.apache.hadoop.hive.ql.exec.repl.util.ReplUtils;

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/lock/show/ShowDbLocksAnalyzer.java
##########
@@ -23,9 +23,6 @@
 import org.apache.hadoop.hive.ql.ddl.DDLWork;
 import org.apache.hadoop.hive.ql.exec.Task;
 import org.apache.hadoop.hive.ql.exec.TaskFactory;
-import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;
-import org.apache.hadoop.hive.ql.lockmgr.LockException;
-import org.apache.hadoop.hive.ql.lockmgr.TxnManagerFactory;

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/add/AlterTableAddConstraintAnalyzer.java
##########
@@ -79,7 +79,8 @@ protected void analyzeCommand(TableName tableName, Map<String, String> partition
       throw new SemanticException(ErrorMsg.NOT_RECOGNIZED_CONSTRAINT.getMsg(constraintNode.getToken().getText()));
     }
 
-    Constraints constraints = new Constraints(primaryKeys, foreignKeys, null, uniqueConstraints, null, checkConstraints);
+    Constraints constraints =
+        new Constraints(primaryKeys, foreignKeys, null, uniqueConstraints, null, checkConstraints);

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/drop/AlterTableDropConstraintDesc.java
##########
@@ -83,5 +82,4 @@ public Long getWriteId() {
   public boolean mayNeedWriteId() {
     return true;
   }
-

Review comment:
       I believe these changes are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AlterViewAsAnalyzer.java
##########
@@ -84,7 +84,7 @@ private void validateCreateView(AlterViewAsDesc desc, SemanticAnalyzer analyzer)
 
     if (oldView == null) {
       String viewNotExistErrorMsg = "The following view does not exist: " + desc.getViewName();
-      throw new SemanticException( ErrorMsg.ALTER_VIEW_AS_SELECT_NOT_EXIST.getMsg(viewNotExistErrorMsg));
+      throw new SemanticException(ErrorMsg.ALTER_VIEW_AS_SELECT_NOT_EXIST.getMsg(viewNotExistErrorMsg));

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/skewed/AlterTableSetSkewedLocationAnalyzer.java
##########
@@ -20,7 +20,6 @@
 
 import java.net.URI;
 import java.net.URISyntaxException;
-import java.util.ArrayList;

Review comment:
       All changes in this file are out of scope of the PR.  Please revert.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924379



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // initialize the times from the table path, in case the files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);

Review comment:
       Fixed.
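
A plausible shape of the fix, assuming the trailing colon-and-space was dropped
from the message (SLF4J appends the stack trace by itself when the last argument
is a Throwable):

    LOG.warn("Cannot access file system; file system status will be unknown", e);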






[GitHub] [hive] miklosgergely commented on pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on pull request #1756:
URL: https://github.com/apache/hive/pull/1756#issuecomment-753538962


   @belugabehr I've fixed most of the issues that you've mentioned. I was mainly focused on putting the code in its proper location, in a manageable structure, and it was great that you've looked at the code thoroughly, finding these suboptimal solutions; thank you for that. Regarding the append().append()-like issues:
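
The rest of that comment does not appear in the archive; presumably it concerns
the string concatenation inside append() calls seen in some of the hunks below
(e.g. in TextDescTableFormatter). For illustration, the two forms compared,
using the LINE_DELIM constant from ShowUtils:

    // concatenation first builds an intermediate String, then appends it:
    tableInfo.append(LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ");

    // chained appends write each piece directly into the builder:
    tableInfo.append(LINE_DELIM).append("# Partition Information").append(LINE_DELIM).append("# ");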




[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550928737



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = Charset.forName("UTF-8");
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+  
+    SortedMap<String, String> sortedProperties = new TreeMap<String, String>(props);
+    List<String> realProps = new ArrayList<String>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");
+  }
+
+  public static void writeToFile(String data, String file, DDLOperationContext context) throws IOException {
+    if (StringUtils.isEmpty(data)) {
+      return;
+    }
+  
+    Path resFile = new Path(file);
+    FileSystem fs = resFile.getFileSystem(context.getConf());
+    try (FSDataOutputStream out = fs.create(resFile);
+         OutputStreamWriter writer = new OutputStreamWriter(out, "UTF-8")) {
+      writer.write(data);
+      writer.write((char) Utilities.newLineCode);
+      writer.flush();
+    }
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value) {
+    appendNonNull(builder, value, false);
+  }
+
+  public static void appendNonNull(StringBuilder builder, Object value, boolean firstColumn) {
+    if (!firstColumn) {
+      builder.append((char)Utilities.tabCode);
+    } else if (builder.length() > 0) {
+      builder.append((char)Utilities.newLineCode);
+    }
+    if (value != null) {
+      builder.append(value);
+    }
+  }
+
+
+  public static String[] extractColumnValues(FieldSchema column, boolean isColumnStatsAvailable,
+      ColumnStatisticsObj columnStatisticsObj) {
+    List<String> values = new ArrayList<>();
+    values.add(column.getName());
+    values.add(column.getType());
+
+    if (isColumnStatsAvailable) {
+      if (columnStatisticsObj != null) {
+        ColumnStatisticsData statsData = columnStatisticsObj.getStatsData();
+        if (statsData.isSetBinaryStats()) {
+          BinaryColumnStatsData binaryStats = statsData.getBinaryStats();
+          values.addAll(Lists.newArrayList("", "", "" + binaryStats.getNumNulls(), "",
+              "" + binaryStats.getAvgColLen(), "" + binaryStats.getMaxColLen(), "", "",
+              convertToString(binaryStats.getBitVectors())));
+        } else if (statsData.isSetStringStats()) {
+          StringColumnStatsData stringStats = statsData.getStringStats();
+          values.addAll(Lists.newArrayList("", "", "" + stringStats.getNumNulls(), "" + stringStats.getNumDVs(),
+              "" + stringStats.getAvgColLen(), "" + stringStats.getMaxColLen(), "", "",
+              convertToString(stringStats.getBitVectors())));
+        } else if (statsData.isSetBooleanStats()) {
+          BooleanColumnStatsData booleanStats = statsData.getBooleanStats();
+          values.addAll(Lists.newArrayList("", "", "" + booleanStats.getNumNulls(), "", "", "",
+              "" + booleanStats.getNumTrues(), "" + booleanStats.getNumFalses(),
+              convertToString(booleanStats.getBitVectors())));
+        } else if (statsData.isSetDecimalStats()) {
+          DecimalColumnStatsData decimalStats = statsData.getDecimalStats();
+          values.addAll(Lists.newArrayList(convertToString(decimalStats.getLowValue()),
+              convertToString(decimalStats.getHighValue()), "" + decimalStats.getNumNulls(),
+              "" + decimalStats.getNumDVs(), "", "", "", "", convertToString(decimalStats.getBitVectors())));
+        } else if (statsData.isSetDoubleStats()) {
+          DoubleColumnStatsData doubleStats = statsData.getDoubleStats();
+          values.addAll(Lists.newArrayList("" + doubleStats.getLowValue(), "" + doubleStats.getHighValue(),
+              "" + doubleStats.getNumNulls(), "" + doubleStats.getNumDVs(), "", "", "", "",
+              convertToString(doubleStats.getBitVectors())));
+        } else if (statsData.isSetLongStats()) {
+          LongColumnStatsData longStats = statsData.getLongStats();
+          values.addAll(Lists.newArrayList("" + longStats.getLowValue(), "" + longStats.getHighValue(),
+              "" + longStats.getNumNulls(), "" + longStats.getNumDVs(), "", "", "", "",
+              convertToString(longStats.getBitVectors())));
+        } else if (statsData.isSetDateStats()) {
+          DateColumnStatsData dateStats = statsData.getDateStats();
+          values.addAll(Lists.newArrayList(convertToString(dateStats.getLowValue()),
+              convertToString(dateStats.getHighValue()), "" + dateStats.getNumNulls(), "" + dateStats.getNumDVs(),
+              "", "", "", "", convertToString(dateStats.getBitVectors())));
+        } else if (statsData.isSetTimestampStats()) {
+          TimestampColumnStatsData timestampStats = statsData.getTimestampStats();
+          values.addAll(Lists.newArrayList(convertToString(timestampStats.getLowValue()),
+              convertToString(timestampStats.getHighValue()), "" + timestampStats.getNumNulls(),
+              "" + timestampStats.getNumDVs(), "", "", "", "", convertToString(timestampStats.getBitVectors())));
+        }
+      } else {
+        values.addAll(Lists.newArrayList("", "", "", "", "", "", "", "", ""));
+      }
+    }
+
+    values.add(column.getComment() != null ? column.getComment() : "");
+    return values.toArray(new String[] {});
+  }
+
+  public static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result = HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    return (result != null) ? result.toString() : "";
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritableV2 writableValue = new DateWritableV2((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static String convertToString(byte[] buffer) {
+    if (buffer == null || buffer.length == 0) {
+      return "";
+    }
+    return new String(Arrays.copyOfRange(buffer, 0, 2));
+  }
+
+  public static String convertToString(org.apache.hadoop.hive.metastore.api.Timestamp val) {
+    if (val == null) {
+      return "";
+    }
+
+    TimestampWritableV2 writableValue = new TimestampWritableV2(Timestamp.ofEpochSecond(val.getSecondsSinceEpoch()));
+    return writableValue.toString();
+  }
+
+  /**
+   * Convert the map to a JSON string.
+   */
+  public static void asJson(OutputStream out, Map<String, Object> data) throws HiveException {
+    try {
+      new ObjectMapper().writeValue(out, data);
+    } catch (IOException e) {
+      throw new HiveException("Unable to convert to json", e);
+    }
+  }
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  public static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  public static final int ALIGNMENT = 20;
+
+  /**
+   * Prints a row with the given fields into the builder.
+   * The last field could be a multiline field, and the extra lines should be padded.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Whether the last field may be printed on multiple lines if it contains newlines
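+   * @param isFormatted Whether to align the fields into padded columns, or to emit plain delimiter-separated output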
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo, boolean isLastLinePadded,
+      boolean isFormatted) {
+    if (!isFormatted) {
+      for (int i = 0; i < fields.length; i++) {
+        Object value = HiveStringUtils.escapeJava(fields[i]);
+        if (value != null) {
+          tableInfo.append(value);
+        }
+        tableInfo.append((i == fields.length - 1) ? LINE_DELIM : FIELD_DELIM);
+      }
+    } else {
+      int[] paddings = new int[fields.length - 1];
+      if (fields.length > 1) {
+        for (int i = 0; i < fields.length - 1; i++) {
+          if (fields[i] == null) {
+            tableInfo.append(FIELD_DELIM);
+            continue;
+          }
+          tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i])).append(FIELD_DELIM);
+          paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+        }
+      }
+      if (fields.length > 0) {
+        String value = fields[fields.length - 1];
+        String unescapedValue = (isLastLinePadded && value != null) ?
+            value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+        indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+      } else {
+        tableInfo.append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Prints a row with the given fields to a formatted line.
+   * 
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String[] fields, StringBuilder tableInfo) {
+    formatOutput(fields, tableInfo, false, true);
+  }
+
+  /**
+   * Prints the name/value pair, and if the value contains newlines, adds one more empty field
+   * before the two values (assumes the name/value pair is already indented accordingly).
+   * 
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();
+    indentMultilineValue(value, tableInfo, new int[] {0, colNameLength}, true);
+  }
+
+  /**
+   * Prints the name/value pair.
+   * If the output is padded, the value is unescaped so that it can be printed on multiple lines.
+   * In this case the pair is assumed to be already indented with a field delimiter.
+   * 
+   * @param name The field name to print
+   * @param value The value to print
+   * @param tableInfo The target builder
+   * @param isOutputPadded Whether the value should be printed as a padded string
+   */
+  public static void formatOutput(String name, String value, StringBuilder tableInfo, boolean isOutputPadded) {
+    String unescapedValue = (isOutputPadded && value != null) ?
+        value.replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+    formatOutput(name, unescapedValue, tableInfo);
+  }
+
+  /**
+   * Indent processing for multi-line values.
+   * Values should be indented the same amount on each line.
+   * If the first line comment starts indented by k, the following line comments should also be indented by k.
+   * 
+   * @param value the value to write
+   * @param tableInfo the buffer to write to
+   * @param columnWidths the widths of the previous columns
+   * @param printNull print null as a string, or do not print anything
+   */
+  private static void indentMultilineValue(String value, StringBuilder tableInfo, int[] columnWidths,
+      boolean printNull) {
+    if (value == null) {
+      if (printNull) {
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", value));
+      }
+      tableInfo.append(LINE_DELIM);
+    } else {
+      String[] valueSegments = value.split("\n|\r|\r\n");
+      tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[0])).append(LINE_DELIM);
+      for (int i = 1; i < valueSegments.length; i++) {
+        printPadding(tableInfo, columnWidths);
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[i])).append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Print the right padding with the given column widths.
+   * 
+   * @param tableInfo The buffer to write to
+   * @param columnWidths The column widths
+   */
+  private static void printPadding(StringBuilder tableInfo, int[] columnWidths) {
+    for (int columnWidth : columnWidths) {
+      if (columnWidth == 0) {
+        tableInfo.append(FIELD_DELIM);
+      } else {
+        tableInfo.append(String.format("%" + columnWidth + "s" + FIELD_DELIM, ""));
+      }
+    }
+  }
+
+  /**
+   * Helps to format tables in SHOW ... command outputs.
+   */
+  public static class TextMetaDataTable {
+    private List<List<String>> table = new ArrayList<>();
+
+    public void addRow(String... values) {
+      table.add(Lists.<String> newArrayList(values));
+    }
+
+    public String renderTable(boolean isOutputPadded) {
+      StringBuilder stringBuilder = new StringBuilder();
+      for (List<String> row : table) {
+        formatOutput(row.toArray(new String[] {}), stringBuilder, isOutputPadded, isOutputPadded);
+      }
+      return stringBuilder.toString();
+    }
+
+    public void transpose() {
+      if (table.size() == 0) {
+        return;
+      }
+      List<List<String>> newTable = new ArrayList<List<String>>();
+      for (int i = 0; i < table.get(0).size(); i++) {
+        newTable.add(new ArrayList<>());
+      }
+      for (List<String> sourceRow : table) {
+        if (newTable.size() != sourceRow.size()) {
+          throw new RuntimeException("invalid table size");
+        }
+        for (int i = 0; i < sourceRow.size(); i++) {
+          newTable.get(i).add(sourceRow.get(i));
+        }
+      }
+      table = newTable;
+    }

Review comment:
       I don't think it is possible any other way; in essence, transposing a table has to be done like this.
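
As a standalone illustration of the row/column swap (names here are
illustrative, not from the PR), [[a, b, c], [d, e, f]] becomes
[[a, d], [b, e], [c, f]]:

    import java.util.ArrayList;
    import java.util.List;

    static List<List<String>> transpose(List<List<String>> table) {
      List<List<String>> result = new ArrayList<>();
      if (table.isEmpty()) {
        return result;
      }
      for (int column = 0; column < table.get(0).size(); column++) {
        List<String> newRow = new ArrayList<>();
        for (List<String> row : table) {
          newRow.add(row.get(column)); // assumes a rectangular table
        }
        result.add(newRow);
      }
      return result;
    }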






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550924109



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
##########
@@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.status.formatter;
+
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.metadata.Hive;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Formats SHOW TABLE STATUS results.
+ */
+public abstract class ShowTableStatusFormatter {
+  private static final Logger LOG = LoggerFactory.getLogger(ShowTableStatusFormatter.class);
+
+  public static ShowTableStatusFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTableStatusFormatter();
+    } else {
+      return new TextShowTableStatusFormatter();
+    }
+  }
+
+  public abstract void showTableStatus(DataOutputStream out, Hive db, HiveConf conf, List<Table> tables, Partition par)
+      throws HiveException;
+
+  StorageInfo getStorageInfo(Table table, Partition partition) throws HiveException {
+    String location = null;
+    String inputFormatClass = null;
+    String outputFormatClass = null;
+    if (partition != null) {
+      if (partition.getLocation() != null) {
+        location = partition.getDataLocation().toString();
+      }
+      inputFormatClass = partition.getInputFormatClass() == null ? null : partition.getInputFormatClass().getName();
+      outputFormatClass = partition.getOutputFormatClass() == null ? null : partition.getOutputFormatClass().getName();
+    } else {
+      if (table.getPath() != null) {
+        location = table.getDataLocation().toString();
+      }
+      inputFormatClass = table.getInputFormatClass() == null ? null : table.getInputFormatClass().getName();
+      outputFormatClass = table.getOutputFormatClass() == null ? null : table.getOutputFormatClass().getName();
+    }
+
+    return new StorageInfo(location, inputFormatClass, outputFormatClass);
+  }
+
+  final static class StorageInfo {
+    final String location;
+    final String inputFormatClass;
+    final String outputFormatClass;
+
+    private StorageInfo(String location, String inputFormatClass, String outputFormatClass) {
+      this.location = location;
+      this.inputFormatClass = inputFormatClass;
+      this.outputFormatClass = outputFormatClass;
+    }
+  }
+
+  List<Path> getLocations(Hive db, Partition partition, Table table) throws HiveException {
+    List<Path> locations = new ArrayList<Path>();
+    if (table.isPartitioned()) {
+      if (partition == null) {
+        for (Partition currPartition : db.getPartitions(table)) {
+          if (currPartition.getLocation() != null) {
+            locations.add(new Path(currPartition.getLocation()));
+          }
+        }
+      } else {
+        if (partition.getLocation() != null) {
+          locations.add(new Path(partition.getLocation()));
+        }
+      }
+    } else {
+      if (table.getPath() != null) {
+        locations.add(table.getPath());
+      }
+    }
+    return locations;
+  }
+
+  FileData getFileData(HiveConf conf, List<Path> locations, Path tablePath) throws IOException {
+    FileData fileData = new FileData();
+    FileSystem fileSystem = tablePath.getFileSystem(conf);
+    // initialize the times from the table path, in case the files in locations do not exist
+    try {
+      FileStatus tmpStatus = fileSystem.getFileStatus(tablePath);
+      fileData.lastAccessTime = tmpStatus.getAccessTime();
+      fileData.lastUpdateTime = tmpStatus.getModificationTime();
+    } catch (IOException e) {
+      LOG.warn("Cannot access File System. File System status will be unknown: ", e);
+      fileData.unknown = true;
+    }
+
+    if (!fileData.unknown) {
+      for (Path location : locations) {
+        try {
+          FileStatus status = fileSystem.getFileStatus(location);
+          // whether this is the table location or a partition location, it
+          // must be a directory
+          if (!status.isDirectory()) {
+            continue;
+          }
+          processDir(status, fileSystem, fileData);
+        } catch (IOException e) {
+          // ignore
+        }
+      }
+    }
+    return fileData;
+  }
+
+  private void processDir(FileStatus status, FileSystem fileSystem, FileData fileData) throws IOException {
+    if (status.getAccessTime() > fileData.lastAccessTime) {
+      fileData.lastAccessTime = status.getAccessTime();
+    }
+    if (status.getModificationTime() > fileData.lastUpdateTime) {

Review comment:
       Fixed.
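
Presumably the same Math.max idiom as in the earlier hunk; a sketch, assuming
that is what the fix looked like:

    fileData.lastAccessTime = Math.max(fileData.lastAccessTime, status.getAccessTime());
    fileData.lastUpdateTime = Math.max(fileData.lastUpdateTime, status.getModificationTime());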






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r552905134



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/process/show/compactions/ShowCompactionsDesc.java
##########
@@ -31,9 +31,11 @@
 public class ShowCompactionsDesc implements DDLDesc, Serializable {
   private static final long serialVersionUID = 1L;
 
+  // @formatter:off
   public static final String SCHEMA =
       "compactionid,dbname,tabname,partname,type,state,hostname,workerid,enqueuetime,starttime,duration,hadoopjobid,errormessage#" +
       "string:string:string:string:string:string:string:string:string:string:string:string:string";
+  // @formatter:on

Review comment:
       Removed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java
##########
@@ -68,7 +68,8 @@ public void analyzeInternal(ASTNode root) throws SemanticException {
     }
 
     Table table = getTable(tableName);
-    Map<Integer, List<ExprNodeGenericFuncDesc>> partitionSpecs = ParseUtils.getFullPartitionSpecs(root, table, conf, false);
+    Map<Integer, List<ExprNodeGenericFuncDesc>> partitionSpecs = ParseUtils.getFullPartitionSpecs(root, table, conf,
+        false);

Review comment:
       Removed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/add/AlterTableAddConstraintAnalyzer.java
##########
@@ -79,7 +79,8 @@ protected void analyzeCommand(TableName tableName, Map<String, String> partition
       throw new SemanticException(ErrorMsg.NOT_RECOGNIZED_CONSTRAINT.getMsg(constraintNode.getToken().getText()));
     }
 
-    Constraints constraints = new Constraints(primaryKeys, foreignKeys, null, uniqueConstraints, null, checkConstraints);
+    Constraints constraints =
+        new Constraints(primaryKeys, foreignKeys, null, uniqueConstraints, null, checkConstraints);

Review comment:
       Removed.

##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableOperation.java
##########
@@ -35,7 +35,6 @@
 import org.apache.hadoop.hive.ql.ddl.DDLOperationContext;
 import org.apache.hadoop.hive.ql.ddl.DDLUtils;
 import org.apache.hadoop.hive.ql.ddl.table.constraint.add.AlterTableAddConstraintOperation;
-import org.apache.hadoop.hive.ql.exec.repl.util.ReplUtils;

Review comment:
       Removed.






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550804441



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/tables/ShowTablesFormatter.java
##########
@@ -0,0 +1,119 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.show.tables;
+
+import com.google.common.collect.ImmutableMap;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.formatting.MapBuilder;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Formats SHOW TABLES results.
+ */
+public abstract class ShowTablesFormatter {
+  public static ShowTablesFormatter getFormatter(HiveConf conf) {
+    if (MetaDataFormatUtils.isJson(conf)) {
+      return new JsonShowTablesFormatter();
+    } else {
+      return new TextShowTablesFormatter();
+    }
+  }
+
+  public abstract void showTables(DataOutputStream out, List<String> tables) throws HiveException;
+
+  abstract void showTablesExtended(DataOutputStream out, List<Table> tables) throws HiveException;
+
+  // ------ Implementations ------
+
+  static class JsonShowTablesFormatter extends ShowTablesFormatter {
+    @Override
+    public void showTables(DataOutputStream out, List<String> tables) throws HiveException {
+      ShowUtils.asJson(out, MapBuilder.create().put("tables", tables).build());
+    }
+
+    @Override
+    void showTablesExtended(DataOutputStream out, List<Table> tables) throws HiveException {
+      if (tables.isEmpty()) {
+        return;
+      }
+
+      List<Map<String, Object>> tableDataList = new ArrayList<>();
+      for (Table table : tables) {
+        Map<String, Object> tableData = ImmutableMap.of(
+            "Table Name", table.getTableName(),
+            "Table Type", table.getTableType().toString());
+        tableDataList.add(tableData);
+      }
+
+      ShowUtils.asJson(out, ImmutableMap.of("tables", tableDataList));
+    }
+  }
+
+  static class TextShowTablesFormatter extends ShowTablesFormatter {
+    @Override
+    public void showTables(DataOutputStream out, List<String> tables) throws HiveException {
+      Iterator<String> iterTbls = tables.iterator();
+
+      try {
+        while (iterTbls.hasNext()) {
+          // create a row per table name
+          out.write(iterTbls.next().getBytes("UTF-8"));

Review comment:
       Fixed
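
The final code is not quoted; a sketch, assuming the fix replaced the charset
literal with StandardCharsets.UTF_8 and the explicit iterator with an enhanced
for loop (the newline write is assumed from the surrounding code):

    import java.nio.charset.StandardCharsets;

    for (String table : tables) {
      out.write(table.getBytes(StandardCharsets.UTF_8));
      out.write(Utilities.newLineCode);
    }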






[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550929543



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
##########
@@ -0,0 +1,575 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl.table.info.desc.formatter;
+
+import org.apache.commons.collections4.CollectionUtils;
+import org.apache.commons.collections4.MapUtils;
+import org.apache.commons.text.StringEscapeUtils;
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils;
+import org.apache.hadoop.hive.ql.ddl.ShowUtils.TextMetaDataTable;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint;
+import org.apache.hadoop.hive.ql.metadata.CheckConstraint.CheckConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint;
+import org.apache.hadoop.hive.ql.metadata.DefaultConstraint.DefaultConstraintCol;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.ql.metadata.NotNullConstraint;
+import org.apache.hadoop.hive.ql.metadata.Partition;
+import org.apache.hadoop.hive.ql.metadata.PrimaryKeyInfo;
+import org.apache.hadoop.hive.ql.metadata.Table;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint;
+import org.apache.hadoop.hive.ql.metadata.ForeignKeyInfo.ForeignKeyCol;
+import org.apache.hadoop.hive.ql.metadata.UniqueConstraint.UniqueConstraintCol;
+import org.apache.hadoop.hive.ql.plan.PlanUtils;
+import org.apache.hadoop.hive.ql.session.SessionState;
+import org.apache.hive.common.util.HiveStringUtils;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.Map.Entry;
+
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.ALIGNMENT;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.DEFAULT_STRINGBUILDER_SIZE;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.FIELD_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.LINE_DELIM;
+import static org.apache.hadoop.hive.ql.ddl.ShowUtils.formatOutput;
+
+/**
+ * Formats DESC TABLE results to text format.
+ */
+class TextDescTableFormatter extends DescTableFormatter {
+  @Override
+  public void describeTable(HiveConf conf, DataOutputStream out, String columnPath, String tableName, Table table,
+      Partition partition, List<FieldSchema> columns, boolean isFormatted, boolean isExtended, boolean isOutputPadded,
+      List<ColumnStatisticsObj> columnStats) throws HiveException {
+    try {
+      addStatsData(out, columnPath, columns, isFormatted, columnStats, isOutputPadded);
+      addPartitionData(out, conf, columnPath, table, isFormatted, isOutputPadded);
+
+      if (columnPath == null) {
+        if (isFormatted) {
+          addFormattedTableData(out, table, partition, isOutputPadded);
+        }
+
+        if (isExtended) {
+          out.write(Utilities.newLineCode);
+          addExtendedTableData(out, table, partition);
+          addExtendedConstraintData(out, table);
+          addExtendedStorageData(out, table);
+        }
+      }
+    } catch (IOException e) {
+      throw new HiveException(e);
+    }
+  }
+
+  private void addStatsData(DataOutputStream out, String columnPath, List<FieldSchema> columns, boolean isFormatted,
+      List<ColumnStatisticsObj> columnStats, boolean isOutputPadded) throws IOException {
+    String statsData = "";
+    
+    TextMetaDataTable metaDataTable = new TextMetaDataTable();
+    boolean needColStats = isFormatted && columnPath != null;
+    if (needColStats) {
+      metaDataTable.addRow(DescTableDesc.COLUMN_STATISTICS_HEADERS.toArray(new String[]{}));
+    } else if (isFormatted && !SessionState.get().isHiveServerQuery()) {
+      statsData += "# ";
+      metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+    }
+    for (FieldSchema column : columns) {
+      metaDataTable.addRow(ShowUtils.extractColumnValues(column, needColStats,
+          getColumnStatisticsObject(column.getName(), column.getType(), columnStats)));
+    }
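+    // for column statistics each header becomes a row label (one statistic per line), so flip the table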
+    if (needColStats) {
+      metaDataTable.transpose();
+    }
+    statsData += metaDataTable.renderTable(isOutputPadded);
+    out.write(statsData.getBytes(StandardCharsets.UTF_8));
+  }
+
+  private ColumnStatisticsObj getColumnStatisticsObject(String columnName, String columnType,
+      List<ColumnStatisticsObj> columnStats) {
+    if (CollectionUtils.isNotEmpty(columnStats)) {
+      for (ColumnStatisticsObj columnStat : columnStats) {
+        if (columnStat.getColName().equalsIgnoreCase(columnName) &&
+            columnStat.getColType().equalsIgnoreCase(columnType)) {
+          return columnStat;
+        }
+      }
+    }
+    return null;
+  }
+
+  private void addPartitionData(DataOutputStream out, HiveConf conf, String columnPath, Table table,
+      boolean isFormatted, boolean isOutputPadded) throws IOException {
+    String partitionData = "";
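+    // without a column path: list the partition columns; with one: report whether its statistics are accurate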
+    if (columnPath == null) {
+      List<FieldSchema> partitionColumns = table.isPartitioned() ? table.getPartCols() : null;
+      if (CollectionUtils.isNotEmpty(partitionColumns) &&
+          conf.getBoolVar(ConfVars.HIVE_DISPLAY_PARTITION_COLUMNS_SEPARATELY)) {
+        TextMetaDataTable metaDataTable = new TextMetaDataTable();
+        partitionData += LINE_DELIM + "# Partition Information" + LINE_DELIM + "# ";
+        metaDataTable.addRow(DescTableDesc.SCHEMA.split("#")[0].split(","));
+        for (FieldSchema partitionColumn : partitionColumns) {
+          metaDataTable.addRow(ShowUtils.extractColumnValues(partitionColumn, false, null));
+        }
+        partitionData += metaDataTable.renderTable(isOutputPadded);
+      }
+    } else {
+      Map<String, String> parameters = table.getParameters();
+      String statsState = parameters == null ? null : parameters.get(StatsSetupConst.COLUMN_STATS_ACCURATE);
+      if (statsState != null) {
+        StringBuilder stringBuilder = new StringBuilder();
+        formatOutput(StatsSetupConst.COLUMN_STATS_ACCURATE,
+            isFormatted ? StringEscapeUtils.escapeJava(statsState) : HiveStringUtils.escapeJava(statsState),
+            stringBuilder, isOutputPadded);
+        partitionData += stringBuilder.toString();
+      }
+    }
+    out.write(partitionData.getBytes(StandardCharsets.UTF_8));
+  }
+
+  private void addFormattedTableData(DataOutputStream out, Table table, Partition partition, boolean isOutputPadded)
+      throws IOException {
+    String formattedTableInfo = partition != null
+        ? getPartitionInformation(partition)
+        : getTableInformation(table, isOutputPadded);
+
+    if (table.getTableConstraintsInfo().isTableConstraintsInfoNotEmpty()) {
+      formattedTableInfo += getConstraintsInformation(table);
+    }
+    out.write(formattedTableInfo.getBytes(StandardCharsets.UTF_8));
+  }
+
+  private String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information").append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getTTable().getSd());
+
+    if (table.isView() || table.isMaterializedView()) {
+      tableInfo.append(LINE_DELIM + "# " + (table.isView() ? "" : "Materialized ") + "View Information" + LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private String getPartitionInformation(Partition partition) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    tableInfo.append(LINE_DELIM + "# Detailed Partition Information" + LINE_DELIM);
+    getPartitionMetaDataInformation(tableInfo, partition);
+
+    if (partition.getTable().getTableType() != TableType.VIRTUAL_VIEW) {
+      tableInfo.append(LINE_DELIM + "# Storage Information" + LINE_DELIM);
+      getStorageDescriptorInfo(tableInfo, partition.getTPartition().getSd());
+    }
+
+    return tableInfo.toString();
+  }
+
+  private void getViewInfo(StringBuilder tableInfo, Table table) {
+    formatOutput("Original Query:", table.getViewOriginalText(), tableInfo);
+    formatOutput("Expanded Query:", table.getViewExpandedText(), tableInfo);
+    if (table.isMaterializedView()) {
+      formatOutput("Rewrite Enabled:", table.isRewriteEnabled() ? "Yes" : "No", tableInfo);
+      formatOutput("Outdated for Rewriting:", table.isOutdatedForRewriting() == null ? "Unknown"
+          : table.isOutdatedForRewriting() ? "Yes" : "No", tableInfo);
+    }
+  }
+
+  private void getStorageDescriptorInfo(StringBuilder tableInfo, StorageDescriptor storageDesc) {
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(), tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+
+    if (storageDesc.isStoredAsSubDirectories()) {
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (storageDesc.getSkewedInfo() != null) {
+      List<String> skewedColNames = sortList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if (CollectionUtils.isNotEmpty(skewedColNames)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues = sortList(
+          storageDesc.getSkewedInfo().getSkewedColValues(), new VectorComparator<String>());
+      if (CollectionUtils.isNotEmpty(skewedColValues)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
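+      // copy into a TreeMap keyed by the VectorComparator so skewed values print in a deterministic order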
+      Map<List<String>, String> skewedColMap = new TreeMap<>(new VectorComparator<String>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if (MapUtils.isNotEmpty(skewedColMap)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap = new TreeMap<>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), PlanUtils.removePrefixFromWarehouseConfig(entry.getValue()));
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(), tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:" + LINE_DELIM);

Review comment:
       Fixed.
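
For context, the labels assembled in this formatter ("SerDe Library:", "Compressed:", and so on) end up in a DESC FORMATTED section shaped roughly like this (an illustrative mock-up; all values are invented, only the labels come from the code above):

    # Storage Information
    SerDe Library:          org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
    InputFormat:            org.apache.hadoop.mapred.TextInputFormat
    OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
    Compressed:             No
    Num Buckets:            -1
    Bucket Columns:         []
    Sort Columns:           []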





[GitHub] [hive] miklosgergely commented on a change in pull request #1756: HIVE-24509 Move show specific codes under DDL and cut MetaDataFormatter classes to pieces

Posted by GitBox <gi...@apache.org>.
miklosgergely commented on a change in pull request #1756:
URL: https://github.com/apache/hive/pull/1756#discussion_r550804652



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
##########
@@ -0,0 +1,397 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.ddl;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.common.type.Timestamp;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.TimestampColumnStatsData;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.io.DateWritableV2;
+import org.apache.hadoop.hive.serde2.io.TimestampWritableV2;
+import org.apache.hive.common.util.HiveStringUtils;
+import org.codehaus.jackson.map.ObjectMapper;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.OutputStreamWriter;
+import java.math.BigInteger;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+/**
+ * Utilities for SHOW ... commands.
+ */
+public final class ShowUtils {
+  private ShowUtils() {
+    throw new UnsupportedOperationException("ShowUtils should not be instantiated");
+  }
+
+  public static final Charset UTF_8 = StandardCharsets.UTF_8;
+
+  public static DataOutputStream getOutputStream(Path outputFile, DDLOperationContext context) throws HiveException {
+    try {
+      FileSystem fs = outputFile.getFileSystem(context.getConf());
+      return fs.create(outputFile);
+    } catch (Exception e) {
+      throw new HiveException(e);
+    }
+  }
+
+  public static String propertiesToString(Map<String, String> props, Set<String> exclude) {
+    if (props.isEmpty()) {
+      return "";
+    }
+
+    SortedMap<String, String> sortedProperties = new TreeMap<>(props);
+    List<String> realProps = new ArrayList<>();
+    for (Map.Entry<String, String> e : sortedProperties.entrySet()) {
+      if (e.getValue() != null && (exclude == null || !exclude.contains(e.getKey()))) {
+        realProps.add("  '" + e.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(e.getValue()) + "'");
+      }
+    }
+    return StringUtils.join(realProps, ", \n");

Review comment:
       Done
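
For context, a minimal usage sketch of the helper above (the demo class name and the sample property values are invented; only ShowUtils.propertiesToString comes from the patch):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.hive.ql.ddl.ShowUtils;

    public class PropertiesToStringDemo {
      public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("transactional", "true");
        props.put("totalSize", "1024");
        // keys in the exclude set are dropped; the rest are sorted, quoted, and escaped
        String rendered = ShowUtils.propertiesToString(props, Collections.singleton("totalSize"));
        System.out.println(rendered); // prints:   'transactional'='true'
      }
    }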



