Posted to commits@drill.apache.org by vo...@apache.org on 2019/12/04 12:55:54 UTC

[drill] branch master updated (6c80a46 -> 51df52d)

This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git.


    from 6c80a46  Update Slack Link in README.md
     new ba601b0  DRILL-7393: Revisit Drill tests to ensure that patching is executed before any test run
     new f3e32c3  DRILL-6540: Upgrade to HADOOP-3.0.3 libraries
     new db64882  DRILL-6540: Updated Hadoop and HBase libraries to the latest versions
     new de41559  DRILL-5844: Incorrect values of TABLE_TYPE returned from method DatabaseMetaData.getTables of JDBC API
     new 20293b6  DRILL-7450: Improve performance for ANALYZE command
     new 93a39cd  DRILL-7324: Final set of "batch count" fixes
     new 364a4d3  DRILL-7463: Apache license is not added to the generated classes
     new 5655dbb  DRILL-7208: Reuse root git.properties file
     new 2b9a25c  DRILL-6904: Update maven-javadoc-plugin, maven-compiler-plugin and maven-assembly-plugin to the latest version
     new 086ecbd  DRILL-7221: Exclude debug files generated by maven debug option from jar
     new 51df52d  Add Volodymyr's PGP key

The 11 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 KEYS                                               | 297 +++++++++++------
 .../org/apache/drill/common/util/GuavaPatcher.java | 232 +++++++-------
 .../apache/drill/common/util/ProtobufPatcher.java  | 154 +++++----
 .../java/org/apache/drill/common/TestVersion.java  |   3 +-
 .../drill/common/exceptions/TestUserException.java |   3 +-
 .../drill/common/map/TestCaseInsensitiveMap.java   |   3 +-
 .../common/util/function/TestCheckedFunction.java  |   3 +-
 .../test/java/org/apache/drill/test/BaseTest.java  |  29 +-
 .../Drill2130CommonHamcrestConfigurationTest.java  |   2 +-
 .../test/java/org/apache/drill/test/DrillTest.java |   4 +-
 .../mapr/drill/maprdb/tests/MaprDBTestsSuite.java  |   3 +-
 .../maprdb/tests/json/TestFieldPathHelper.java     |   3 +-
 .../org/apache/drill/hbase/HBaseTestsSuite.java    |  10 +-
 ...l2130StorageHBaseHamcrestConfigurationTest.java |   3 +-
 .../core/src/main/codegen/includes/license.ftl     |  17 +
 .../inspectors/SkipFooterRecordsInspectorTest.java |   3 +-
 .../store/hive/schema/TestColumnListCache.java     |   3 +-
 .../store/hive/schema/TestSchemaConversion.java    |   3 +-
 ...30StorageHiveCoreHamcrestConfigurationTest.java |   3 +-
 .../exec/store/jdbc/TestJdbcPluginWithH2IT.java    |  11 +
 .../exec/store/jdbc/TestJdbcPluginWithMySQLIT.java |  12 +
 .../drill/exec/store/kafka/TestKafkaSuit.java      |   3 +-
 .../kafka/decoders/MessageReaderFactoryTest.java   |   3 +-
 contrib/storage-kudu/.gitignore                    |  15 -
 .../apache/drill/store/kudu/TestKuduConnect.java   |   3 +-
 .../drill/exec/store/mongo/MongoTestSuit.java      |   3 +-
 .../exec/store/mongo/TestMongoChunkAssignment.java |   3 +-
 distribution/pom.xml                               |  79 ++++-
 distribution/src/assemble/component.xml            |  81 +++--
 distribution/src/{ => main}/resources/LICENSE      |   0
 distribution/src/{ => main}/resources/NOTICE       |   0
 distribution/src/{ => main}/resources/README.md    |   0
 .../src/{ => main}/resources/auto-setup.sh         |   0
 .../src/{ => main}/resources/core-site-example.xml |   0
 .../src/{ => main}/resources/distrib-env.sh        |   0
 .../src/{ => main}/resources/distrib-setup.sh      |   0
 .../src/{ => main}/resources/drill-am-log.xml      |   4 +-
 distribution/src/{ => main}/resources/drill-am.sh  |   0
 distribution/src/{ => main}/resources/drill-conf   |   0
 .../src/{ => main}/resources/drill-config.sh       |   0
 .../src/{ => main}/resources/drill-embedded        |   0
 .../src/{ => main}/resources/drill-embedded.bat    |   0
 distribution/src/{ => main}/resources/drill-env.sh |   0
 .../src/{ => main}/resources/drill-localhost       |   0
 .../resources/drill-on-yarn-example.conf           |   0
 .../src/{ => main}/resources/drill-on-yarn.sh      |   0
 .../resources/drill-override-example.conf          |  48 +--
 .../src/{ => main}/resources/drill-override.conf   |   4 +-
 .../src/{ => main}/resources/drill-setup.sh        |   0
 .../resources/drill-sqlline-override-example.conf  |   0
 distribution/src/{ => main}/resources/drillbit     |   0
 distribution/src/{ => main}/resources/drillbit.sh  |   0
 distribution/src/{ => main}/resources/dumpcat      |   0
 .../src/{ => main}/resources/hadoop-excludes.txt   |   0
 distribution/src/{ => main}/resources/logback.xml  |   8 +-
 distribution/src/{ => main}/resources/runbit       |   0
 .../src/{ => main}/resources/saffron.properties    |   0
 distribution/src/{ => main}/resources/sqlline      |   0
 distribution/src/{ => main}/resources/sqlline.bat  |   1 +
 .../storage-plugins-override-example.conf          |   0
 distribution/src/{ => main}/resources/submit_plan  |   0
 .../src/main/resources/winutils/hadoop.dll         | Bin 0 -> 85504 bytes
 .../src/main/resources/winutils/winutils.exe       | Bin 0 -> 112640 bytes
 .../src/{ => main}/resources/yarn-client-log.xml   |   2 +-
 .../src/{ => main}/resources/yarn-drillbit.sh      |   0
 docs/dev/HadoopWinutils.md                         |  11 +
 docs/dev/MetastoreAnalyze.md                       |  61 +++-
 drill-yarn/pom.xml                                 |   6 +
 .../yarn/appMaster/DrillApplicationMaster.java     |   5 +-
 .../org/apache/drill/yarn/client/DrillOnYarn.java  |   5 +-
 .../org/apache/drill/yarn/client/TestClient.java   |   3 +-
 .../drill/yarn/client/TestCommandLineOptions.java  |   3 +-
 .../org/apache/drill/yarn/core/TestConfig.java     |   3 +-
 .../org/apache/drill/yarn/scripts/TestScripts.java |   3 +-
 .../apache/drill/yarn/zk/TestAmRegistration.java   |   3 +-
 .../org/apache/drill/yarn/zk/TestZkRegistry.java   |   3 +-
 .../java/org/apache/drill/exec/expr/TestPrune.java |  38 ---
 exec/java-exec/pom.xml                             |  59 ++++
 .../src/main/codegen/includes/license.ftl          |  17 +
 .../apache/commons/logging/impl/Log4JLogger.java   |   7 +-
 .../org/apache/drill/exec/expr/IsPredicate.java    |   2 +-
 .../drill/exec/expr/fn/DrillAggFuncHolder.java     |   4 +-
 .../expr/fn/DrillComplexWriterAggFuncHolder.java   |  40 ++-
 .../apache/drill/exec/expr/fn/DrillFuncHolder.java |  78 +++--
 .../drill/exec/metastore/ColumnNamesOptions.java   |  80 +++++
 .../metastore/analyze/AnalyzeFileInfoProvider.java |  24 +-
 .../metastore/analyze/AnalyzeInfoProvider.java     |  20 +-
 .../analyze/AnalyzeParquetInfoProvider.java        |  18 +-
 .../analyze/FileMetadataInfoCollector.java         |   3 +-
 .../analyze/MetadataAggregateContext.java          |  24 ++
 .../base/AbstractGroupScanWithMetadata.java        |  36 ++-
 .../exec/physical/config/HashToMergeExchange.java  |   3 +-
 ...MetadataAggPOP.java => MetadataHashAggPOP.java} |  26 +-
 ...tadataAggPOP.java => MetadataStreamAggPOP.java} |  19 +-
 .../apache/drill/exec/physical/impl/ScanBatch.java |   1 -
 .../drill/exec/physical/impl/TopN/TopNBatch.java   |   6 +-
 .../exec/physical/impl/aggregate/HashAggBatch.java | 138 +++++---
 .../physical/impl/aggregate/HashAggTemplate.java   |  15 +-
 .../physical/impl/aggregate/StreamingAggBatch.java |  15 +-
 .../exec/physical/impl/filter/FilterTemplate2.java |   7 +-
 .../physical/impl/flatten/FlattenRecordBatch.java  |   2 +-
 .../exec/physical/impl/join/HashJoinBatch.java     |   5 +-
 .../physical/impl/join/NestedLoopJoinBatch.java    |   8 +-
 ...aAggBatch.java => MetadataAggregateHelper.java} | 173 ++++++----
 .../impl/metadata/MetadataControllerBatch.java     |  77 +++--
 .../impl/metadata/MetadataHandlerBatch.java        |  49 ++-
 .../impl/metadata/MetadataHashAggBatch.java        |  56 ++++
 ...eator.java => MetadataHashAggBatchCreator.java} |   8 +-
 .../impl/metadata/MetadataStreamAggBatch.java      |  62 ++++
 ...tor.java => MetadataStreamAggBatchCreator.java} |   8 +-
 .../physical/impl/project/ProjectRecordBatch.java  |   4 +-
 .../RangePartitionRecordBatch.java                 |   1 -
 .../impl/scan/file/FileMetadataManager.java        |   8 +-
 .../impl/statistics/StatisticsMergeBatch.java      |  52 ++-
 .../impl/svremover/RemovingRecordBatch.java        |   7 +-
 .../physical/impl/union/UnionAllRecordBatch.java   |   7 +-
 .../physical/impl/unnest/UnnestRecordBatch.java    |   1 +
 .../impl/unpivot/UnpivotMapsRecordBatch.java       |   4 +-
 .../physical/impl/validate/BatchValidator.java     | 106 +------
 .../impl/window/WindowFrameRecordBatch.java        |  14 +-
 .../resultSet/model/single/BaseReaderBuilder.java  |   2 +-
 .../exec/physical/rowSet/RowSetFormatter.java      |  13 +-
 .../apache/drill/exec/planner/PlannerPhase.java    |   2 +
 .../logical/ConvertCountToDirectScanRule.java      |   2 +-
 .../ConvertMetadataAggregateToDirectScanRule.java  | 271 ++++++++++++++++
 .../planner/physical/DrillDistributionTrait.java   |  81 ++++-
 .../drill/exec/planner/physical/HashAggPrule.java  |  14 +-
 .../drill/exec/planner/physical/HashPrelUtil.java  |  37 +--
 .../exec/planner/physical/MetadataAggPrule.java    | 202 +++++++++++-
 .../planner/physical/MetadataHandlerPrule.java     |   2 +-
 ...tadataAggPrel.java => MetadataHashAggPrel.java} |  22 +-
 ...dataAggPrel.java => MetadataStreamAggPrel.java} |  21 +-
 .../drill/exec/planner/physical/PrelUtil.java      |  27 +-
 .../sql/handlers/MetastoreAnalyzeTableHandler.java |  42 +--
 .../apache/drill/exec/record/VectorAccessible.java |   2 -
 .../apache/drill/exec/record/VectorContainer.java  |   6 +
 .../org/apache/drill/exec/server/Drillbit.java     |   5 +-
 .../apache/drill/exec/store/ColumnExplorer.java    |  25 +-
 .../drill/exec/store/LocalSyncableFileSystem.java  |  10 +-
 .../exec/store/easy/json/JSONRecordReader.java     |  22 +-
 .../drill/exec/store/ischema/FilterEvaluator.java  |   2 +-
 .../drill/exec/store/ischema/RecordCollector.java  |   2 +-
 .../store/parquet/AbstractParquetGroupScan.java    |  98 +++++-
 .../ParquetFileTableMetadataProviderBuilder.java   |   4 +
 .../exec/store/pojo/DynamicPojoRecordReader.java   |  25 +-
 .../java/org/apache/drill/BaseTestInheritance.java |  54 ++++
 .../java/org/apache/drill/TestImplicitCasting.java |   3 +-
 .../drill/common/scanner/TestClassPathScanner.java |   3 +-
 .../org/apache/drill/exec/TestOpSerialization.java |   3 +-
 .../java/org/apache/drill/exec/TestSSLConfig.java  |   3 +-
 .../ConnectTriesPropertyTestClusterBits.java       |   3 +-
 .../exec/client/DrillSqlLineApplicationTest.java   |   3 +-
 .../drill/exec/compile/TestEvaluationVisitor.java  |   3 +-
 .../drill/exec/coord/zk/TestEphemeralStore.java    |   3 +-
 .../drill/exec/coord/zk/TestEventDispatcher.java   |   3 +-
 .../apache/drill/exec/coord/zk/TestPathUtils.java  |   3 +-
 .../org/apache/drill/exec/coord/zk/TestZKACL.java  |   3 +-
 .../drill/exec/coord/zk/TestZookeeperClient.java   |   3 +-
 .../drill/exec/dotdrill/TestDotDrillUtil.java      |   3 +-
 .../exec/expr/fn/FunctionInitializerTest.java      |   3 +-
 .../drill/exec/expr/fn/impl/TestSqlPatterns.java   |   3 +-
 .../fn/registry/FunctionRegistryHolderTest.java    |   3 +-
 .../drill/exec/fn/impl/TestAggregateFunction.java  |   2 +-
 .../drill/exec/fn/impl/TestAggregateFunctions.java | 206 +++++++++---
 .../impersonation/TestImpersonationMetadata.java   |  15 +-
 .../physical/impl/agg/TestAggWithAnyValue.java     | 304 ++++++++++++++----
 .../physical/impl/agg/TestHashAggEmitOutcome.java  | 205 ++++++------
 .../physical/impl/common/HashPartitionTest.java    |   3 +-
 .../common/HashTableAllocationTrackerTest.java     |   3 +-
 .../impl/join/TestBatchSizePredictorImpl.java      |   3 +-
 .../impl/join/TestBuildSidePartitioningImpl.java   |   3 +-
 .../join/TestHashJoinHelperSizeCalculatorImpl.java |   3 +-
 .../impl/join/TestHashJoinMemoryCalculator.java    |   3 +-
 ...estHashTableSizeCalculatorConservativeImpl.java |   3 +-
 .../join/TestHashTableSizeCalculatorLeanImpl.java  |   3 +-
 .../exec/physical/impl/join/TestPartitionStat.java |   5 +-
 .../impl/join/TestPostBuildCalculationsImpl.java   |   3 +-
 .../scan/project/projSet/TestProjectionSet.java    |   3 +-
 .../impl/svremover/AbstractGenericCopierTest.java  |   3 +-
 .../resultSet/project/TestProjectedTuple.java      |   3 +-
 .../resultSet/project/TestProjectionType.java      |   3 +-
 .../exec/physical/unit/TestOutputBatchSize.java    |   6 +-
 .../common/TestNumericEquiDepthHistogram.java      |   3 +-
 .../TestHardAffinityFragmentParallelizer.java      |   3 +-
 .../drill/exec/planner/logical/DrillOptiqTest.java |   3 +-
 .../exec/planner/logical/FilterSplitTest.java      |   3 +-
 .../drill/exec/record/TestMaterializedField.java   |   3 +-
 .../record/metadata/schema/TestSchemaProvider.java |   3 +-
 .../exec/resourcemgr/TestResourcePoolTree.java     |   3 +-
 .../TestBestFitSelectionPolicy.java                |   3 +-
 .../TestDefaultSelectionPolicy.java                |   3 +-
 .../config/selectors/TestAclSelector.java          |   3 +-
 .../config/selectors/TestComplexSelectors.java     |   3 +-
 .../config/selectors/TestNotEqualSelector.java     |   3 +-
 .../selectors/TestResourcePoolSelectors.java       |   3 +-
 .../config/selectors/TestTagSelector.java          |   3 +-
 .../rpc/control/ConnectionManagerRegistryTest.java |   3 +-
 .../control/TestLocalControlConnectionManager.java |   3 +-
 .../apache/drill/exec/server/TestFailureUtils.java |   3 +-
 .../drill/exec/server/options/OptionValueTest.java |   3 +-
 .../server/options/PersistedOptionValueTest.java   |   3 +-
 .../exec/server/options/TestConfigLinkage.java     |   3 +-
 .../exec/server/rest/StatusResourcesTest.java      |   3 +-
 .../exec/server/rest/TestMainLoginPageModel.java   |   3 +-
 .../exec/server/rest/WebSessionResourcesTest.java  |   3 +-
 .../rest/spnego/TestDrillSpnegoAuthenticator.java  |   3 +-
 .../rest/spnego/TestSpnegoAuthentication.java      |   3 +-
 .../exec/server/rest/spnego/TestSpnegoConfig.java  |   3 +-
 .../drill/exec/sql/TestMetastoreCommands.java      | 285 ++++++++++++++---
 .../drill/exec/sql/TestSqlBracketlessSyntax.java   |   3 +-
 .../drill/exec/store/StorageStrategyTest.java      |   3 +-
 .../exec/store/bson/TestBsonRecordReader.java      |   3 +-
 .../drill/exec/store/dfs/TestDrillFileSystem.java  |   3 +-
 .../store/dfs/TestFormatPluginOptionExtractor.java |   3 +-
 .../store/parquet/TestComplexColumnInSchema.java   |   3 +-
 .../store/parquet/TestParquetMetadataVersion.java  |   3 +-
 .../store/parquet/TestParquetReaderConfig.java     |   3 +-
 .../store/parquet/TestParquetReaderUtility.java    |   3 +-
 .../drill/exec/store/store/TestAssignment.java     |   3 +-
 ...Drill2130JavaExecHamcrestConfigurationTest.java |   3 +-
 .../drill/exec/util/DrillExceptionUtilTest.java    |   3 +-
 .../drill/exec/util/FileSystemUtilTestBase.java    |   3 +-
 .../exec/util/TestApproximateStringMatcher.java    |   3 +-
 .../drill/exec/util/TestArrayWrappedIntIntMap.java |   3 +-
 .../exec/util/TestValueVectorElementFormatter.java |   3 +-
 .../drill/exec/vector/TestSplitAndTransfer.java    |   3 +-
 .../exec/vector/accessor/GenericAccessorTest.java  |   3 +-
 .../exec/vector/accessor/TestTimePrintMillis.java  |   3 +-
 .../exec/vector/complex/writer/TestJsonReader.java | 159 ++++++----
 .../complex/writer/TestPromotableWriter.java       |   3 +-
 .../exec/vector/complex/writer/TestRepeated.java   |   3 +-
 .../org/apache/drill/exec/work/batch/FileTest.java |   4 +-
 .../drill/exec/work/filter/BloomFilterTest.java    |   3 +-
 .../work/fragment/FragmentStatusReporterTest.java  |   3 +-
 .../exec/work/metadata/TestMetadataProvider.java   |   2 +-
 .../java/org/apache/drill/test/ExampleTest.java    |   2 +-
 .../test/rowSet/test/TestRowSetComparison.java     |   3 +-
 .../test/resources/functions/test_covariance.json  |   6 +-
 .../resources/functions/test_logical_aggr.json     |   6 +-
 exec/jdbc-all/pom.xml                              |   5 +-
 .../org/apache/drill/jdbc/DrillbitClassLoader.java |  45 +--
 .../org/apache/drill/jdbc/ITTestShadedJar.java     |   9 +-
 .../jdbc/ConnectionTransactionMethodsTest.java     |   3 +-
 .../apache/drill/jdbc/DatabaseMetaDataTest.java    |   3 +-
 .../drill/jdbc/DrillColumnMetaDataListTest.java    |   3 +-
 .../jdbc/impl/TypeConvertingSqlAccessorTest.java   |   3 +-
 ...Drill2130JavaJdbcHamcrestConfigurationTest.java |   3 +-
 .../Drill2288GetColumnsMetadataWhenNoRowsTest.java |   3 +-
 .../apache/drill/jdbc/test/TestJdbcMetadata.java   |   2 +-
 .../drill/exec/memory/BoundsCheckingTest.java      |   4 +-
 .../apache/drill/exec/memory/TestAccountant.java   |   3 +-
 .../drill/exec/memory/TestBaseAllocator.java       |   3 +-
 .../apache/drill/exec/memory/TestEndianess.java    |   3 +-
 exec/vector/src/main/codegen/includes/license.ftl  |  17 +
 .../src/main/codegen/templates/ComplexWriters.java |   6 +-
 .../record/metadata/TestMetadataProperties.java    |   3 +-
 .../schema/parser/TestParserErrorHandling.java     |   3 +-
 .../metadata/schema/parser/TestSchemaParser.java   |   3 +-
 .../exec/vector/VariableLengthVectorTest.java      |   4 +-
 .../org/apache/drill/exec/vector/VectorTest.java   |   3 +-
 .../expression/FunctionHolderExpression.java       |   2 +-
 .../common/logical/data/MetadataAggregate.java     |   2 +-
 .../drill/common/expression/SchemaPathTest.java    |   3 +-
 .../expression/fn/JodaDateValidatorTest.java       |   3 +-
 .../drill/common/logical/data/OrderTest.java       |   3 +-
 .../drill/metastore/iceberg/IcebergBaseTest.java   |  11 +-
 .../components/tables/TestBasicTablesRequests.java |   3 +-
 .../tables/TestBasicTablesTransformer.java         |   3 +-
 .../components/tables/TestMetastoreTableInfo.java  |   3 +-
 .../tables/TestTableMetadataUnitConversion.java    |   3 +-
 .../metastore/metadata/MetadataSerDeTest.java      |   3 +-
 pom.xml                                            | 350 +++++++++++++--------
 272 files changed, 3662 insertions(+), 1791 deletions(-)
 copy exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/output/OutputWidthCalculator.java => common/src/test/java/org/apache/drill/test/BaseTest.java (52%)
 delete mode 100644 contrib/storage-kudu/.gitignore
 rename distribution/src/{ => main}/resources/LICENSE (100%)
 rename distribution/src/{ => main}/resources/NOTICE (100%)
 rename distribution/src/{ => main}/resources/README.md (100%)
 rename distribution/src/{ => main}/resources/auto-setup.sh (100%)
 rename distribution/src/{ => main}/resources/core-site-example.xml (100%)
 rename distribution/src/{ => main}/resources/distrib-env.sh (100%)
 rename distribution/src/{ => main}/resources/distrib-setup.sh (100%)
 rename distribution/src/{ => main}/resources/drill-am-log.xml (99%)
 rename distribution/src/{ => main}/resources/drill-am.sh (100%)
 rename distribution/src/{ => main}/resources/drill-conf (100%)
 rename distribution/src/{ => main}/resources/drill-config.sh (100%)
 rename distribution/src/{ => main}/resources/drill-embedded (100%)
 rename distribution/src/{ => main}/resources/drill-embedded.bat (100%)
 rename distribution/src/{ => main}/resources/drill-env.sh (100%)
 rename distribution/src/{ => main}/resources/drill-localhost (100%)
 rename distribution/src/{ => main}/resources/drill-on-yarn-example.conf (100%)
 rename distribution/src/{ => main}/resources/drill-on-yarn.sh (100%)
 rename distribution/src/{ => main}/resources/drill-override-example.conf (88%)
 rename distribution/src/{ => main}/resources/drill-override.conf (97%)
 rename distribution/src/{ => main}/resources/drill-setup.sh (100%)
 rename distribution/src/{ => main}/resources/drill-sqlline-override-example.conf (100%)
 rename distribution/src/{ => main}/resources/drillbit (100%)
 rename distribution/src/{ => main}/resources/drillbit.sh (100%)
 rename distribution/src/{ => main}/resources/dumpcat (100%)
 rename distribution/src/{ => main}/resources/hadoop-excludes.txt (100%)
 rename distribution/src/{ => main}/resources/logback.xml (99%)
 rename distribution/src/{ => main}/resources/runbit (100%)
 rename distribution/src/{ => main}/resources/saffron.properties (100%)
 rename distribution/src/{ => main}/resources/sqlline (100%)
 rename distribution/src/{ => main}/resources/sqlline.bat (96%)
 rename distribution/src/{ => main}/resources/storage-plugins-override-example.conf (100%)
 rename distribution/src/{ => main}/resources/submit_plan (100%)
 create mode 100644 distribution/src/main/resources/winutils/hadoop.dll
 create mode 100644 distribution/src/main/resources/winutils/winutils.exe
 rename distribution/src/{ => main}/resources/yarn-client-log.xml (99%)
 rename distribution/src/{ => main}/resources/yarn-drillbit.sh (100%)
 create mode 100644 docs/dev/HadoopWinutils.md
 delete mode 100644 exec/interpreter/src/test/java/org/apache/drill/exec/expr/TestPrune.java
 copy common/src/test/java/org/apache/drill/categories/HbaseStorageTest.java => exec/java-exec/src/main/java/org/apache/commons/logging/impl/Log4JLogger.java (73%)
 create mode 100644 exec/java-exec/src/main/java/org/apache/drill/exec/metastore/ColumnNamesOptions.java
 copy exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/{MetadataAggPOP.java => MetadataHashAggPOP.java} (62%)
 rename exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/{MetadataAggPOP.java => MetadataStreamAggPOP.java} (72%)
 rename exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/{MetadataAggBatch.java => MetadataAggregateHelper.java} (68%)
 create mode 100644 exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHashAggBatch.java
 copy exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/{MetadataAggBatchCreator.java => MetadataHashAggBatchCreator.java} (80%)
 create mode 100644 exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataStreamAggBatch.java
 rename exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/{MetadataAggBatchCreator.java => MetadataStreamAggBatchCreator.java} (80%)
 create mode 100644 exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/ConvertMetadataAggregateToDirectScanRule.java
 copy exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/{MetadataAggPrel.java => MetadataHashAggPrel.java} (78%)
 rename exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/{MetadataAggPrel.java => MetadataStreamAggPrel.java} (77%)
 create mode 100644 exec/java-exec/src/test/java/org/apache/drill/BaseTestInheritance.java


[drill] 06/11: DRILL-7324: Final set of "batch count" fixes

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit 93a39cd9b0790129bb6888cbfefc100efd8a6bc8
Author: Paul Rogers <pa...@yahoo.com>
AuthorDate: Fri Nov 29 18:58:59 2019 -0800

    DRILL-7324: Final set of "batch count" fixes
    
    Final set of fixes for batch count/record count issues. Enables
    vector checking for all operators.
    
    closes #1912
---
 .../apache/drill/exec/physical/impl/ScanBatch.java |   1 -
 .../drill/exec/physical/impl/TopN/TopNBatch.java   |   6 +-
 .../exec/physical/impl/aggregate/HashAggBatch.java |   2 +-
 .../physical/impl/aggregate/StreamingAggBatch.java |   4 +-
 .../exec/physical/impl/filter/FilterTemplate2.java |   7 +-
 .../exec/physical/impl/join/HashJoinBatch.java     |   5 +-
 .../physical/impl/join/NestedLoopJoinBatch.java    |   8 +-
 .../impl/metadata/MetadataControllerBatch.java     |  33 ++---
 .../RangePartitionRecordBatch.java                 |   1 -
 .../impl/statistics/StatisticsMergeBatch.java      |  52 +++----
 .../impl/svremover/RemovingRecordBatch.java        |   7 +-
 .../physical/impl/union/UnionAllRecordBatch.java   |   7 +-
 .../physical/impl/unnest/UnnestRecordBatch.java    |   1 +
 .../impl/unpivot/UnpivotMapsRecordBatch.java       |   4 +-
 .../physical/impl/validate/BatchValidator.java     | 114 +--------------
 .../impl/window/WindowFrameRecordBatch.java        |  14 +-
 .../apache/drill/exec/record/VectorAccessible.java |   2 -
 .../apache/drill/exec/record/VectorContainer.java  |   6 +
 .../exec/store/easy/json/JSONRecordReader.java     |  22 +--
 .../exec/vector/complex/writer/TestJsonReader.java | 159 +++++++++++++--------
 20 files changed, 177 insertions(+), 278 deletions(-)
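
A note for readers skimming the diff that follows: the recurring change is to move per-vector count bookkeeping into VectorContainer helpers (setValueCount, setEmpty, copySchemaFrom) so that batch, container and vector counts stay consistent for BatchValidator. A minimal Java sketch of that idiom, assuming an operator holding a VectorContainer named "container" (the class and method names below are illustrative only, not part of the patch):

    import org.apache.drill.exec.record.VectorContainer;
    import org.apache.drill.exec.record.VectorWrapper;

    class BatchCountIdiom {

      // Before this patch: operators set value counts vector by vector,
      // then set the container's record count separately.
      static void before(VectorContainer container, int rowCount) {
        for (VectorWrapper<?> w : container) {
          w.getValueVector().getMutator().setValueCount(rowCount);
        }
        container.setRecordCount(rowCount);
      }

      // After this patch: the container owns the bookkeeping.
      static void after(VectorContainer container, int rowCount) {
        if (rowCount == 0) {
          container.setEmpty();            // allocates offset vectors, then zeroes the counts
        } else {
          container.setValueCount(rowCount);
        }
      }
    }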

diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java
index 3e658cb..f464b27 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java
@@ -575,7 +575,6 @@ public class ScanBatch implements CloseableRecordBatch {
     }
   }
 
-
   @Override
   public Iterator<VectorWrapper<?>> iterator() {
     return container.iterator();
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/TopN/TopNBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/TopN/TopNBatch.java
index 5ae6e76..baef314 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/TopN/TopNBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/TopN/TopNBatch.java
@@ -559,12 +559,8 @@ public class TopNBatch extends AbstractRecordBatch<TopN> {
       // Transfers count number of records from hyperBatch to simple container
       final int copiedRecords = copier.copyRecords(0, count);
       assert copiedRecords == count;
-      for (VectorWrapper<?> v : newContainer) {
-        ValueVector.Mutator m = v.getValueVector().getMutator();
-        m.setValueCount(count);
-      }
       newContainer.buildSchema(BatchSchema.SelectionVectorMode.NONE);
-      newContainer.setRecordCount(count);
+      newContainer.setValueCount(count);
       // Store all the batches containing limit number of records
       batchBuilder.add(newBatch);
     } while (queueSv4.next());
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java
index 45c670b..38fb14e 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java
@@ -269,7 +269,7 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
     for (VectorWrapper<?> w : container) {
       AllocationHelper.allocatePrecomputedChildCount(w.getValueVector(), 0, 0, 0);
     }
-    container.setValueCount(0);
+    container.setEmpty();
     if (incoming.getRecordCount() > 0) {
       hashAggMemoryManager.update();
     }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/StreamingAggBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/StreamingAggBatch.java
index c3b504a..586fa32 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/StreamingAggBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/StreamingAggBatch.java
@@ -186,9 +186,7 @@ public class StreamingAggBatch extends AbstractRecordBatch<StreamingAggregate> {
     if (!createAggregator()) {
       state = BatchState.DONE;
     }
-    for (VectorWrapper<?> w : container) {
-      w.getValueVector().allocateNew();
-    }
+    container.allocateNew();
 
     if (complexWriters != null) {
       container.buildSchema(SelectionVectorMode.NONE);
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/filter/FilterTemplate2.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/filter/FilterTemplate2.java
index c189367..e3b6070 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/filter/FilterTemplate2.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/filter/FilterTemplate2.java
@@ -66,12 +66,7 @@ public abstract class FilterTemplate2 implements Filterer {
     if (recordCount == 0) {
       outgoingSelectionVector.setRecordCount(0);
       outgoingSelectionVector.setBatchActualRecordCount(0);
-
-      // Must allocate vectors, then set count to zero. Allocation
-      // is needed since offset vectors must contain at least one
-      // item (the required value of 0 in index location 0.)
-      outgoing.getContainer().allocateNew();
-      outgoing.getContainer().setValueCount(0);
+      outgoing.getContainer().setEmpty();
       return;
     }
     if (! outgoingSelectionVector.allocateNewSafe(recordCount)) {
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/HashJoinBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/HashJoinBatch.java
index eab38ec..cded844 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/HashJoinBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/HashJoinBatch.java
@@ -77,6 +77,7 @@ import org.apache.drill.exec.record.JoinBatchMemoryManager;
 import org.apache.drill.exec.record.MaterializedField;
 import org.apache.drill.exec.record.RecordBatch;
 import org.apache.drill.exec.record.TypedFieldId;
+import org.apache.drill.exec.record.VectorAccessibleUtilities;
 import org.apache.drill.exec.record.VectorContainer;
 import org.apache.drill.exec.record.VectorWrapper;
 import org.apache.drill.exec.util.record.RecordBatchStats;
@@ -703,9 +704,7 @@ public class HashJoinBatch extends AbstractBinaryRecordBatch<HashJoinPOP> implem
   private void killAndDrainUpstream(RecordBatch batch, IterOutcome upstream, boolean isLeft) {
       batch.kill(true);
       while (upstream == IterOutcome.OK_NEW_SCHEMA || upstream == IterOutcome.OK) {
-        for (VectorWrapper<?> wrapper : batch) {
-          wrapper.getValueVector().clear();
-        }
+        VectorAccessibleUtilities.clear(batch);
         upstream = next( isLeft ? HashJoinHelper.LEFT_INPUT : HashJoinHelper.RIGHT_INPUT, batch);
       }
   }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/NestedLoopJoinBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/NestedLoopJoinBatch.java
index 6b7edd2..c84f954 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/NestedLoopJoinBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/join/NestedLoopJoinBatch.java
@@ -87,10 +87,10 @@ public class NestedLoopJoinBatch extends AbstractBinaryRecordBatch<NestedLoopJoi
   private int outputRecords;
 
   // We accumulate all the batches on the right side in a hyper container.
-  private ExpandableHyperContainer rightContainer = new ExpandableHyperContainer();
+  private final ExpandableHyperContainer rightContainer = new ExpandableHyperContainer();
 
   // Record count of the individual batches in the right hyper container
-  private LinkedList<Integer> rightCounts = new LinkedList<>();
+  private final LinkedList<Integer> rightCounts = new LinkedList<>();
 
 
   // Generator mapping for the right side
@@ -372,9 +372,7 @@ public class NestedLoopJoinBatch extends AbstractBinaryRecordBatch<NestedLoopJoi
 
       if (leftUpstream != IterOutcome.NONE) {
         leftSchema = left.getSchema();
-        for (final VectorWrapper<?> vw : left) {
-          container.addOrGet(vw.getField());
-        }
+        container.copySchemaFrom(left);
       }
 
       if (rightUpstream != IterOutcome.NONE) {
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataControllerBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataControllerBatch.java
index 9ccae49..ab82769 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataControllerBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataControllerBatch.java
@@ -17,6 +17,17 @@
  */
 package org.apache.drill.exec.physical.impl.metadata;
 
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
 import org.apache.commons.lang3.StringUtils;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.types.TypeProtos;
@@ -24,6 +35,7 @@ import org.apache.drill.common.types.Types;
 import org.apache.drill.exec.exception.OutOfMemoryException;
 import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.metastore.analyze.AnalyzeColumnUtils;
+import org.apache.drill.exec.metastore.analyze.MetadataIdentifierUtils;
 import org.apache.drill.exec.metastore.analyze.MetastoreAnalyzeConstants;
 import org.apache.drill.exec.ops.FragmentContext;
 import org.apache.drill.exec.physical.config.MetadataControllerPOP;
@@ -32,7 +44,6 @@ import org.apache.drill.exec.physical.rowSet.RowSetReader;
 import org.apache.drill.exec.planner.common.DrillStatsTable;
 import org.apache.drill.exec.planner.physical.PlannerSettings;
 import org.apache.drill.exec.planner.physical.WriterPrel;
-import org.apache.drill.exec.metastore.analyze.MetadataIdentifierUtils;
 import org.apache.drill.exec.record.AbstractBinaryRecordBatch;
 import org.apache.drill.exec.record.BatchSchema;
 import org.apache.drill.exec.record.RecordBatch;
@@ -80,17 +91,6 @@ import org.apache.hadoop.fs.Path;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.function.Function;
-import java.util.stream.Collectors;
-
 /**
  * Terminal operator for producing ANALYZE statement. This operator is responsible for converting
  * obtained metadata, fetching absent metadata from the Metastore and storing resulting metadata into the Metastore.
@@ -109,9 +109,9 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
 
   private boolean firstLeft = true;
   private boolean firstRight = true;
-  private boolean finished = false;
-  private boolean finishedRight = false;
-  private int recordCount = 0;
+  private boolean finished;
+  private boolean finishedRight;
+  private int recordCount;
 
   protected MetadataControllerBatch(MetadataControllerPOP popConfig,
       FragmentContext context, RecordBatch left, RecordBatch right) throws OutOfMemoryException {
@@ -129,13 +129,10 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
 
   protected boolean setupNewSchema() {
     container.clear();
-
     container.addOrGet(MetastoreAnalyzeConstants.OK_FIELD_NAME, Types.required(TypeProtos.MinorType.BIT), null);
     container.addOrGet(MetastoreAnalyzeConstants.SUMMARY_FIELD_NAME, Types.required(TypeProtos.MinorType.VARCHAR), null);
-
     container.buildSchema(BatchSchema.SelectionVectorMode.NONE);
     container.setEmpty();
-
     return true;
   }
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/rangepartitioner/RangePartitionRecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/rangepartitioner/RangePartitionRecordBatch.java
index 11d307b..7a61489 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/rangepartitioner/RangePartitionRecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/rangepartitioner/RangePartitionRecordBatch.java
@@ -184,5 +184,4 @@ public class RangePartitionRecordBatch extends AbstractSingleRecordBatch<RangePa
     logger.error("RangePartitionRecordBatch[container={}, numPartitions={}, recordCount={}, partitionIdVector={}]",
         container, numPartitions, recordCount, partitionIdVector);
   }
-
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/statistics/StatisticsMergeBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/statistics/StatisticsMergeBatch.java
index 15962ad..921c92b 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/statistics/StatisticsMergeBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/statistics/StatisticsMergeBatch.java
@@ -23,6 +23,7 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.TimeZone;
+
 import org.apache.drill.common.exceptions.UserException;
 import org.apache.drill.common.expression.LogicalExpression;
 import org.apache.drill.common.expression.ValueExpressions;
@@ -46,11 +47,12 @@ import org.apache.drill.exec.vector.ValueVector;
 import org.apache.drill.exec.vector.complex.MapVector;
 import org.apache.drill.metastore.statistics.Statistic;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
- *
  * Example input and output:
- * Schema of incoming batch:
+ * Schema of incoming batch:<pre>
  *    "columns"       : MAP - Column names
  *       "region_id"  : VARCHAR
  *       "sales_city" : VARCHAR
@@ -65,7 +67,7 @@ import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
  *       "sales_city" : BIGINT - nonnullstatcount(sales_city)
  *       "cnt"        : BIGINT - nonnullstatcount(cnt)
  *   .... another map for next stats function ....
- * Schema of outgoing batch:
+ * </pre>Schema of outgoing batch:<pre>
  *    "schema" : BIGINT - Schema number. For each schema change this number is incremented.
  *    "computed" : DATE - What time is it computed?
  *    "columns"       : MAP - Column names
@@ -82,17 +84,19 @@ import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
  *       "sales_city" : BIGINT - nonnullstatcount(sales_city)
  *       "cnt"        : BIGINT - nonnullstatcount(cnt)
  *   .... another map for next stats function ....
+ * </pre>
  */
+
 public class StatisticsMergeBatch extends AbstractSingleRecordBatch<StatisticsMerge> {
-  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(StatisticsMergeBatch.class);
-  private Map<String, String> functions;
+  private static final Logger logger = LoggerFactory.getLogger(StatisticsMergeBatch.class);
+
+  private final Map<String, String> functions;
   private boolean first = true;
-  private boolean finished = false;
-  private int schema = 0;
-  private int recordCount = 0;
-  private List<String> columnsList = null;
+  private boolean finished;
+  private int schema;
+  private List<String> columnsList;
   private double samplePercent = 100.0;
-  private List<MergedStatistic> mergedStatisticList = null;
+  private final List<MergedStatistic> mergedStatisticList;
 
   public StatisticsMergeBatch(StatisticsMerge popConfig, RecordBatch incoming,
       FragmentContext context) throws OutOfMemoryException {
@@ -115,20 +119,6 @@ public class StatisticsMergeBatch extends AbstractSingleRecordBatch<StatisticsMe
   }
 
   /*
-   * Adds the `name` column value vector in the `parent` map vector. These `name` columns are
-   * table columns for which statistics will be computed.
-   */
-  private ValueVector addMapVector(String name, MapVector parent, LogicalExpression expr)
-      throws SchemaChangeException {
-    LogicalExpression mle = PhysicalOperatorUtil.materializeExpression(expr, incoming, context);
-    Class<? extends ValueVector> vvc =
-        TypeHelper.getValueVectorClass(mle.getMajorType().getMinorType(),
-            mle.getMajorType().getMode());
-    ValueVector vector = parent.addOrGet(name, mle.getMajorType(), vvc);
-    return vector;
-  }
-
-  /*
    * Identify the list of fields within a map which are generated by StatisticsMerge. Perform
    * basic sanity check i.e. all maps have the same number of columns and those columns are
    * the same in each map
@@ -229,8 +219,7 @@ public class StatisticsMergeBatch extends AbstractSingleRecordBatch<StatisticsMe
         }
       }
     }
-    container.setRecordCount(0);
-    recordCount = 0;
+    container.setEmpty();
     container.buildSchema(incoming.getSchema().getSelectionVectorMode());
   }
 
@@ -238,7 +227,7 @@ public class StatisticsMergeBatch extends AbstractSingleRecordBatch<StatisticsMe
    * Determines the MajorType based on the incoming value vector. Please look at the
    * comments above the class definition which describes the incoming/outgoing batch schema
    */
-  private void addVectorToOutgoingContainer(String outStatName, VectorWrapper vw)
+  private void addVectorToOutgoingContainer(String outStatName, VectorWrapper<?> vw)
       throws SchemaChangeException {
     // Input map vector
     MapVector inputVector = (MapVector) vw.getValueVector();
@@ -306,9 +295,8 @@ public class StatisticsMergeBatch extends AbstractSingleRecordBatch<StatisticsMe
         }
       }
     }
-    ++recordCount;
     // Populate the number of records (1) inside the outgoing batch.
-    container.setRecordCount(1);
+    container.setValueCount(1);
     return IterOutcome.OK;
   }
 
@@ -343,9 +331,7 @@ public class StatisticsMergeBatch extends AbstractSingleRecordBatch<StatisticsMe
   }
 
   @Override
-  public void dump() {
-
-  }
+  public void dump() { }
 
   @Override
   public IterOutcome innerNext() {
@@ -404,6 +390,6 @@ public class StatisticsMergeBatch extends AbstractSingleRecordBatch<StatisticsMe
 
   @Override
   public int getRecordCount() {
-    return recordCount;
+    return container.getRecordCount();
   }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/svremover/RemovingRecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/svremover/RemovingRecordBatch.java
index 4471248..a9584bb 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/svremover/RemovingRecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/svremover/RemovingRecordBatch.java
@@ -25,13 +25,16 @@ import org.apache.drill.exec.record.AbstractSingleRecordBatch;
 import org.apache.drill.exec.record.BatchSchema.SelectionVectorMode;
 import org.apache.drill.exec.record.RecordBatch;
 import org.apache.drill.exec.record.WritableBatch;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class RemovingRecordBatch extends AbstractSingleRecordBatch<SelectionVectorRemover>{
-  private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(RemovingRecordBatch.class);
+  private static final Logger logger = LoggerFactory.getLogger(RemovingRecordBatch.class);
 
   private Copier copier;
 
-  public RemovingRecordBatch(SelectionVectorRemover popConfig, FragmentContext context, RecordBatch incoming) throws OutOfMemoryException {
+  public RemovingRecordBatch(SelectionVectorRemover popConfig, FragmentContext context,
+      RecordBatch incoming) throws OutOfMemoryException {
     super(popConfig, context, incoming);
     logger.debug("Created.");
   }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/union/UnionAllRecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/union/UnionAllRecordBatch.java
index 25dae80..ed2b66e 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/union/UnionAllRecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/union/UnionAllRecordBatch.java
@@ -18,6 +18,7 @@
 package org.apache.drill.exec.physical.impl.union;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Iterator;
 import java.util.List;
 import java.util.NoSuchElementException;
@@ -68,8 +69,8 @@ public class UnionAllRecordBatch extends AbstractBinaryRecordBatch<UnionAll> {
 
   private final SchemaChangeCallBack callBack = new SchemaChangeCallBack();
   private UnionAller unionall;
-  private final List<TransferPair> transfers = Lists.newArrayList();
-  private final List<ValueVector> allocationVectors = Lists.newArrayList();
+  private final List<TransferPair> transfers = new ArrayList<>();
+  private final List<ValueVector> allocationVectors = new ArrayList<>();
   private int recordCount;
   private UnionInputIterator unionInputIterator;
 
@@ -341,7 +342,7 @@ public class UnionAllRecordBatch extends AbstractBinaryRecordBatch<UnionAll> {
   }
 
   private class UnionInputIterator implements Iterator<Pair<IterOutcome, BatchStatusWrappper>> {
-    private Stack<BatchStatusWrappper> batchStatusStack = new Stack<>();
+    private final Stack<BatchStatusWrappper> batchStatusStack = new Stack<>();
 
     UnionInputIterator(IterOutcome leftOutCome, RecordBatch left, IterOutcome rightOutCome, RecordBatch right) {
       if (rightOutCome == IterOutcome.OK_NEW_SCHEMA) {
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unnest/UnnestRecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unnest/UnnestRecordBatch.java
index 1715c99..85eceea 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unnest/UnnestRecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unnest/UnnestRecordBatch.java
@@ -295,6 +295,7 @@ public class UnnestRecordBatch extends AbstractTableFunctionRecordBatch<UnnestPO
       remainderIndex = 0;
       logger.debug("IterOutcome: EMIT.");
     }
+    rowIdVector.getMutator().setValueCount(outputRecords);
     container.setValueCount(outputRecords);
 
     memoryManager.updateOutgoingStats(outputRecords);
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unpivot/UnpivotMapsRecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unpivot/UnpivotMapsRecordBatch.java
index 99bd6d1..72a337a 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unpivot/UnpivotMapsRecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/unpivot/UnpivotMapsRecordBatch.java
@@ -64,8 +64,8 @@ import org.slf4j.LoggerFactory;
  *       "sales_city" : BIGINT - nonnullstatcount(sales_city)
  *       "cnt"        : BIGINT - nonnullstatcount(cnt)
  *   .... another map for next stats function ....
- *
- * Schema of output:
+ * </pre>
+ * Schema of output: <pre>
  *  "schema"           : BIGINT - Schema number. For each schema change this number is incremented.
  *  "computed"         : BIGINT - What time is this computed?
  *  "column"           : column name
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/BatchValidator.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/BatchValidator.java
index 8793a65..e1ffd7a 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/BatchValidator.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/BatchValidator.java
@@ -17,41 +17,7 @@
  */
 package org.apache.drill.exec.physical.impl.validate;
 
-import java.util.IdentityHashMap;
-import java.util.Map;
-
 import org.apache.drill.common.types.TypeProtos.MinorType;
-import org.apache.drill.exec.physical.impl.ScanBatch;
-import org.apache.drill.exec.physical.impl.WriterRecordBatch;
-import org.apache.drill.exec.physical.impl.TopN.TopNBatch;
-import org.apache.drill.exec.physical.impl.aggregate.HashAggBatch;
-import org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch;
-import org.apache.drill.exec.physical.impl.filter.FilterRecordBatch;
-import org.apache.drill.exec.physical.impl.filter.RuntimeFilterRecordBatch;
-import org.apache.drill.exec.physical.impl.flatten.FlattenRecordBatch;
-import org.apache.drill.exec.physical.impl.join.HashJoinBatch;
-import org.apache.drill.exec.physical.impl.join.MergeJoinBatch;
-import org.apache.drill.exec.physical.impl.join.NestedLoopJoinBatch;
-import org.apache.drill.exec.physical.impl.limit.LimitRecordBatch;
-import org.apache.drill.exec.physical.impl.limit.PartitionLimitRecordBatch;
-import org.apache.drill.exec.physical.impl.mergereceiver.MergingRecordBatch;
-import org.apache.drill.exec.physical.impl.orderedpartitioner.OrderedPartitionRecordBatch;
-import org.apache.drill.exec.physical.impl.metadata.MetadataHashAggBatch;
-import org.apache.drill.exec.physical.impl.metadata.MetadataStreamAggBatch;
-import org.apache.drill.exec.physical.impl.metadata.MetadataControllerBatch;
-import org.apache.drill.exec.physical.impl.metadata.MetadataHandlerBatch;
-import org.apache.drill.exec.physical.impl.project.ProjectRecordBatch;
-import org.apache.drill.exec.physical.impl.protocol.OperatorRecordBatch;
-import org.apache.drill.exec.physical.impl.rangepartitioner.RangePartitionRecordBatch;
-import org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch;
-import org.apache.drill.exec.physical.impl.trace.TraceRecordBatch;
-import org.apache.drill.exec.physical.impl.union.UnionAllRecordBatch;
-import org.apache.drill.exec.physical.impl.unnest.UnnestRecordBatch;
-import org.apache.drill.exec.physical.impl.unorderedreceiver.UnorderedReceiverBatch;
-import org.apache.drill.exec.physical.impl.unpivot.UnpivotMapsRecordBatch;
-import org.apache.drill.exec.physical.impl.window.WindowFrameRecordBatch;
-import org.apache.drill.exec.physical.impl.xsort.managed.ExternalSortBatch;
-import org.apache.drill.exec.record.CloseableRecordBatch;
 import org.apache.drill.exec.record.RecordBatch;
 import org.apache.drill.exec.record.SimpleVectorWrapper;
 import org.apache.drill.exec.record.VectorAccessible;
@@ -200,89 +166,13 @@ public class BatchValidator {
     }
   }
 
-  private enum CheckMode {
-    /** No checking. */
-    NONE,
-    /** Check only batch, container counts. */
-    COUNTS,
-    /** Check vector value counts. */
-    VECTORS
-    };
-
-  private static final Map<Class<? extends CloseableRecordBatch>, CheckMode> checkRules = buildRules();
-
   private final ErrorReporter errorReporter;
 
   public BatchValidator(ErrorReporter errorReporter) {
     this.errorReporter = errorReporter;
   }
 
-  /**
-   * At present, most operators will not pass the checks here. The following
-   * table identifies those that should be checked, and the degree of check.
-   * Over time, this table should include all operators, and thus become
-   * unnecessary.
-   */
-  private static Map<Class<? extends CloseableRecordBatch>, CheckMode> buildRules() {
-    Map<Class<? extends CloseableRecordBatch>, CheckMode> rules = new IdentityHashMap<>();
-    rules.put(OperatorRecordBatch.class, CheckMode.VECTORS);
-    rules.put(ScanBatch.class, CheckMode.VECTORS);
-    rules.put(ProjectRecordBatch.class, CheckMode.VECTORS);
-    rules.put(FilterRecordBatch.class, CheckMode.VECTORS);
-    rules.put(PartitionLimitRecordBatch.class, CheckMode.VECTORS);
-    rules.put(UnnestRecordBatch.class, CheckMode.VECTORS);
-    rules.put(HashAggBatch.class, CheckMode.VECTORS);
-    rules.put(RemovingRecordBatch.class, CheckMode.VECTORS);
-    rules.put(StreamingAggBatch.class, CheckMode.VECTORS);
-    rules.put(RuntimeFilterRecordBatch.class, CheckMode.VECTORS);
-    rules.put(FlattenRecordBatch.class, CheckMode.VECTORS);
-    rules.put(MergeJoinBatch.class, CheckMode.VECTORS);
-    rules.put(NestedLoopJoinBatch.class, CheckMode.VECTORS);
-    rules.put(LimitRecordBatch.class, CheckMode.VECTORS);
-    rules.put(MergingRecordBatch.class, CheckMode.VECTORS);
-    rules.put(OrderedPartitionRecordBatch.class, CheckMode.VECTORS);
-    rules.put(RangePartitionRecordBatch.class, CheckMode.VECTORS);
-    rules.put(TraceRecordBatch.class, CheckMode.VECTORS);
-    rules.put(UnionAllRecordBatch.class, CheckMode.VECTORS);
-    rules.put(UnorderedReceiverBatch.class, CheckMode.VECTORS);
-    rules.put(UnpivotMapsRecordBatch.class, CheckMode.VECTORS);
-    rules.put(WindowFrameRecordBatch.class, CheckMode.VECTORS);
-    rules.put(TopNBatch.class, CheckMode.VECTORS);
-    rules.put(HashJoinBatch.class, CheckMode.VECTORS);
-    rules.put(ExternalSortBatch.class, CheckMode.VECTORS);
-    rules.put(WriterRecordBatch.class, CheckMode.VECTORS);
-    rules.put(MetadataStreamAggBatch.class, CheckMode.VECTORS);
-    rules.put(MetadataHashAggBatch.class, CheckMode.VECTORS);
-    rules.put(MetadataHandlerBatch.class, CheckMode.VECTORS);
-    rules.put(MetadataControllerBatch.class, CheckMode.VECTORS);
-    return rules;
-  }
-
-  private static CheckMode lookup(Object subject) {
-    CheckMode checkMode = checkRules.get(subject.getClass());
-    return checkMode == null ? CheckMode.NONE : checkMode;
-  }
-
   public static boolean validate(RecordBatch batch) {
-    // This is a handy place to trace batches as they flow up
-    // the DAG. Works best for single-threaded runs with few records.
-    // System.out.println(batch.getClass().getSimpleName());
-    // RowSetFormatter.print(batch);
-
-    CheckMode checkMode = lookup(batch);
-
-    // If no rule, don't check this batch.
-
-    if (checkMode == CheckMode.NONE) {
-
-      // As work proceeds, might want to log those batches not checked.
-      // For now, there are too many.
-
-      return true;
-    }
-
-    // All batches that do any checks will at least check counts.
-
     ErrorReporter reporter = errorReporter(batch);
     int rowCount = batch.getRecordCount();
     int valueCount = rowCount;
@@ -340,9 +230,7 @@ public class BatchValidator {
         break;
       }
     }
-    if (checkMode == CheckMode.VECTORS) {
-      new BatchValidator(reporter).validateBatch(batch, valueCount);
-    }
+    new BatchValidator(reporter).validateBatch(batch, valueCount);
     return reporter.errorCount() == 0;
   }
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/window/WindowFrameRecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/window/WindowFrameRecordBatch.java
index 6ed004f..07a5c76 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/window/WindowFrameRecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/window/WindowFrameRecordBatch.java
@@ -18,6 +18,7 @@
 package org.apache.drill.exec.physical.impl.window;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
 
@@ -47,7 +48,6 @@ import org.apache.drill.exec.record.VectorAccessible;
 import org.apache.drill.exec.record.VectorWrapper;
 import org.apache.drill.exec.vector.ValueVector;
 import org.apache.drill.shaded.guava.com.google.common.collect.Iterables;
-import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -64,7 +64,7 @@ public class WindowFrameRecordBatch extends AbstractRecordBatch<WindowPOP> {
   private List<WindowDataBatch> batches;
 
   private WindowFramer[] framers;
-  private final List<WindowFunction> functions = Lists.newArrayList();
+  private final List<WindowFunction> functions = new ArrayList<>();
 
   private boolean noMoreBatches; // true when downstream returns NONE
   private BatchSchema schema;
@@ -75,7 +75,7 @@ public class WindowFrameRecordBatch extends AbstractRecordBatch<WindowPOP> {
       RecordBatch incoming) throws OutOfMemoryException {
     super(popConfig, context);
     this.incoming = incoming;
-    batches = Lists.newArrayList();
+    batches = new ArrayList<>();
   }
 
   /**
@@ -260,17 +260,15 @@ public class WindowFrameRecordBatch extends AbstractRecordBatch<WindowPOP> {
 
     logger.trace("creating framer(s)");
 
-    List<LogicalExpression> keyExprs = Lists.newArrayList();
-    List<LogicalExpression> orderExprs = Lists.newArrayList();
+    List<LogicalExpression> keyExprs = new ArrayList<>();
+    List<LogicalExpression> orderExprs = new ArrayList<>();
     boolean requireFullPartition = false;
 
     boolean useDefaultFrame = false; // at least one window function uses the DefaultFrameTemplate
     boolean useCustomFrame = false; // at least one window function uses the CustomFrameTemplate
 
     // all existing vectors will be transferred to the outgoing container in framer.doWork()
-    for (VectorWrapper<?> wrapper : batch) {
-      container.addOrGet(wrapper.getField());
-    }
+    container.copySchemaFrom(batch);
 
     // add aggregation vectors to the container, and materialize corresponding expressions
     for (NamedExpression ne : popConfig.getAggregations()) {
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/record/VectorAccessible.java b/exec/java-exec/src/main/java/org/apache/drill/exec/record/VectorAccessible.java
index f51f521..03e8ffa 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/record/VectorAccessible.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/record/VectorAccessible.java
@@ -21,10 +21,8 @@ import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.exec.record.selection.SelectionVector2;
 import org.apache.drill.exec.record.selection.SelectionVector4;
 
-// TODO javadoc
 public interface VectorAccessible extends Iterable<VectorWrapper<?>> {
   // TODO are these <?> related in any way? Should they be the same one?
-  // TODO javadoc
   VectorWrapper<?> getValueAccessorById(Class<?> clazz, int... fieldIds);
 
   /**
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/record/VectorContainer.java b/exec/java-exec/src/main/java/org/apache/drill/exec/record/VectorContainer.java
index 1cfc61d..3796e5a 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/record/VectorContainer.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/record/VectorContainer.java
@@ -553,4 +553,10 @@ public class VectorContainer implements VectorAccessible {
     // in the offset vectors that need it.
     setValueCount(0);
   }
+
+  public void copySchemaFrom(VectorAccessible other) {
+    for (VectorWrapper<?> wrapper : other) {
+      addOrGet(wrapper.getField());
+    }
+  }
 }
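The new helper centralizes the addOrGet loop that WindowFrameRecordBatch (above) and similar operators used to repeat. A hedged usage sketch; the variable names are illustrative only:

  // Hedged sketch: mirror an upstream batch's schema into an outgoing container.
  // "incomingBatch" stands for any VectorAccessible, e.g. the operator's incoming batch.
  VectorContainer outgoing = new VectorContainer();
  outgoing.copySchemaFrom(incomingBatch);   // adds or reuses one vector per incoming field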
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/easy/json/JSONRecordReader.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/easy/json/JSONRecordReader.java
index da42b27..0ab4181 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/easy/json/JSONRecordReader.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/easy/json/JSONRecordReader.java
@@ -79,8 +79,8 @@ public class JSONRecordReader extends AbstractRecordReader {
    * @param columns  pathnames of columns/subfields to read
    * @throws OutOfMemoryException
    */
-  public JSONRecordReader(final FragmentContext fragmentContext, final Path inputPath, final DrillFileSystem fileSystem,
-      final List<SchemaPath> columns) throws OutOfMemoryException {
+  public JSONRecordReader(FragmentContext fragmentContext, Path inputPath, DrillFileSystem fileSystem,
+      List<SchemaPath> columns) throws OutOfMemoryException {
     this(fragmentContext, inputPath, null, fileSystem, columns);
   }
 
@@ -137,15 +137,15 @@ public class JSONRecordReader extends AbstractRecordReader {
   }
 
   @Override
-  public void setup(final OperatorContext context, final OutputMutator output) throws ExecutionSetupException {
+  public void setup(OperatorContext context, OutputMutator output) throws ExecutionSetupException {
     try{
       if (hadoopPath != null) {
-        this.stream = fileSystem.openPossiblyCompressedStream(hadoopPath);
+        stream = fileSystem.openPossiblyCompressedStream(hadoopPath);
       }
 
-      this.writer = new VectorContainerWriter(output, unionEnabled);
+      writer = new VectorContainerWriter(output, unionEnabled);
       if (isSkipQuery()) {
-        this.jsonReader = new CountingJsonReader(fragmentContext.getManagedBuffer(), enableNanInf, enableEscapeAnyChar);
+        jsonReader = new CountingJsonReader(fragmentContext.getManagedBuffer(), enableNanInf, enableEscapeAnyChar);
       } else {
         this.jsonReader = new JsonReader.Builder(fragmentContext.getManagedBuffer())
             .schemaPathColumns(ImmutableList.copyOf(getColumns()))
@@ -157,7 +157,7 @@ public class JSONRecordReader extends AbstractRecordReader {
             .build();
       }
       setupParser();
-    } catch (final Exception e){
+    } catch (Exception e){
       handleAndRaise("Failure reading JSON file", e);
     }
   }
@@ -182,7 +182,7 @@ public class JSONRecordReader extends AbstractRecordReader {
     int columnNr = -1;
 
     if (e instanceof JsonParseException) {
-      final JsonParseException ex = (JsonParseException) e;
+      JsonParseException ex = (JsonParseException) e;
       message = ex.getOriginalMessage();
       columnNr = ex.getLocation().getColumnNr();
     }
@@ -226,7 +226,8 @@ public class JSONRecordReader extends AbstractRecordReader {
           }
           ++parseErrorCount;
           if (printSkippedMalformedJSONRecordLineNumber) {
-            logger.debug("Error parsing JSON in " + hadoopPath.getName() + " : line nos :" + (recordCount + parseErrorCount));
+            logger.debug("Error parsing JSON in {}: line: {}",
+                hadoopPath.getName(), recordCount + parseErrorCount);
           }
           if (write == ReadState.JSON_RECORD_PARSE_EOF_ERROR) {
             break;
@@ -254,8 +255,9 @@ public class JSONRecordReader extends AbstractRecordReader {
 
   @Override
   public void close() throws Exception {
-    if(stream != null) {
+    if (stream != null) {
       stream.close();
+      stream = null;
     }
   }
 }
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestJsonReader.java b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestJsonReader.java
index 79aa1d3..04bc67d 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestJsonReader.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestJsonReader.java
@@ -81,6 +81,9 @@ public class TestJsonReader extends BaseTestQuery {
 
   @Test
   public void schemaChange() throws Exception {
+    // Verifies that the schema change does not cause a
+    // crash. A pretty minimal test.
+    // TODO: Verify actual results.
     test("select b from dfs.`vector/complex/writer/schemaChange/`");
   }
 
@@ -267,12 +270,15 @@ public class TestJsonReader extends BaseTestQuery {
 
   @Test
   public void testAllTextMode() throws Exception {
-    test("alter system set `store.json.all_text_mode` = true");
-    String[] queries = {"select * from cp.`store/json/schema_change_int_to_string.json`"};
-    long[] rowCounts = {3};
-    String filename = "/store/json/schema_change_int_to_string.json";
-    runTestsOnFile(filename, UserBitShared.QueryType.SQL, queries, rowCounts);
-    test("alter system set `store.json.all_text_mode` = false");
+    try {
+      alterSession(ExecConstants.JSON_ALL_TEXT_MODE, true);
+      String[] queries = {"select * from cp.`store/json/schema_change_int_to_string.json`"};
+      long[] rowCounts = {3};
+      String filename = "/store/json/schema_change_int_to_string.json";
+      runTestsOnFile(filename, UserBitShared.QueryType.SQL, queries, rowCounts);
+    } finally {
+      resetSessionOption(ExecConstants.JSON_ALL_TEXT_MODE);
+    }
   }
 
   @Test
@@ -293,58 +299,87 @@ public class TestJsonReader extends BaseTestQuery {
 
   @Test
   public void testNullWhereListExpected() throws Exception {
-    test("alter system set `store.json.all_text_mode` = true");
-    String[] queries = {"select * from cp.`store/json/null_where_list_expected.json`"};
-    long[] rowCounts = {3};
-    String filename = "/store/json/null_where_list_expected.json";
-    runTestsOnFile(filename, UserBitShared.QueryType.SQL, queries, rowCounts);
-    test("alter system set `store.json.all_text_mode` = false");
+    try {
+      alterSession(ExecConstants.JSON_ALL_TEXT_MODE, true);
+      String[] queries = {"select * from cp.`store/json/null_where_list_expected.json`"};
+      long[] rowCounts = {3};
+      String filename = "/store/json/null_where_list_expected.json";
+      runTestsOnFile(filename, UserBitShared.QueryType.SQL, queries, rowCounts);
+    }
+    finally {
+      resetSessionOption(ExecConstants.JSON_ALL_TEXT_MODE);
+    }
   }
 
   @Test
   public void testNullWhereMapExpected() throws Exception {
-    test("alter system set `store.json.all_text_mode` = true");
-    String[] queries = {"select * from cp.`store/json/null_where_map_expected.json`"};
-    long[] rowCounts = {3};
-    String filename = "/store/json/null_where_map_expected.json";
-    runTestsOnFile(filename, UserBitShared.QueryType.SQL, queries, rowCounts);
-    test("alter system set `store.json.all_text_mode` = false");
+    try {
+      alterSession(ExecConstants.JSON_ALL_TEXT_MODE, true);
+      String[] queries = {"select * from cp.`store/json/null_where_map_expected.json`"};
+      long[] rowCounts = {3};
+      String filename = "/store/json/null_where_map_expected.json";
+      runTestsOnFile(filename, UserBitShared.QueryType.SQL, queries, rowCounts);
+    }
+    finally {
+      resetSessionOption(ExecConstants.JSON_ALL_TEXT_MODE);
+    }
   }
 
   @Test
   public void ensureProjectionPushdown() throws Exception {
-    // Tests to make sure that we are correctly eliminating schema changing columns.  If completes, means that the projection pushdown was successful.
-    test("alter system set `store.json.all_text_mode` = false; "
-        + "select  t.field_1, t.field_3.inner_1, t.field_3.inner_2, t.field_4.inner_1 "
-        + "from cp.`store/json/schema_change_int_to_string.json` t");
+    try {
+      // Tests to make sure that we are correctly eliminating schema-changing
+      // columns. If the query completes, the projection pushdown was
+      // successful.
+      test("alter system set `store.json.all_text_mode` = false; "
+          + "select  t.field_1, t.field_3.inner_1, t.field_3.inner_2, t.field_4.inner_1 "
+          + "from cp.`store/json/schema_change_int_to_string.json` t");
+    } finally {
+      resetSessionOption(ExecConstants.JSON_ALL_TEXT_MODE);
+    }
   }
 
-  // The project pushdown rule is correctly adding the projected columns to the scan, however it is not removing
-  // the redundant project operator after the scan, this tests runs a physical plan generated from one of the tests to
-  // ensure that the project is filtering out the correct data in the scan alone
+  // The project pushdown rule is correctly adding the projected columns to the
+  // scan, however it is not removing the redundant project operator after the
+  // scan; this test runs a physical plan generated from one of the tests to
+  // ensure that the project is filtering out the correct data in the scan alone.
   @Test
   public void testProjectPushdown() throws Exception {
-    String[] queries = {Files.asCharSource(DrillFileUtils.getResourceAsFile("/store/json/project_pushdown_json_physical_plan.json"), Charsets.UTF_8).read()};
-    long[] rowCounts = {3};
-    String filename = "/store/json/schema_change_int_to_string.json";
-    test("alter system set `store.json.all_text_mode` = false");
-    runTestsOnFile(filename, UserBitShared.QueryType.PHYSICAL, queries, rowCounts);
-
-    List<QueryDataBatch> results = testPhysicalWithResults(queries[0]);
-    assertEquals(1, results.size());
-    // "`field_1`", "`field_3`.`inner_1`", "`field_3`.`inner_2`", "`field_4`.`inner_1`"
-
-    RecordBatchLoader batchLoader = new RecordBatchLoader(getAllocator());
-    QueryDataBatch batch = results.get(0);
-    assertTrue(batchLoader.load(batch.getHeader().getDef(), batch.getData()));
-
-    // this used to be five.  It is now three.  This is because the plan doesn't have a project.
-    // Scanners are not responsible for projecting non-existent columns (as long as they project one column)
-    assertEquals(3, batchLoader.getSchema().getFieldCount());
-    testExistentColumns(batchLoader);
-
-    batch.release();
-    batchLoader.clear();
+    try {
+      String[] queries = {Files.asCharSource(DrillFileUtils.getResourceAsFile(
+          "/store/json/project_pushdown_json_physical_plan.json"), Charsets.UTF_8).read()};
+      String filename = "/store/json/schema_change_int_to_string.json";
+      alterSession(ExecConstants.JSON_ALL_TEXT_MODE, false);
+      long[] rowCounts = {3};
+      runTestsOnFile(filename, UserBitShared.QueryType.PHYSICAL, queries, rowCounts);
+
+      List<QueryDataBatch> results = testPhysicalWithResults(queries[0]);
+      assertEquals(1, results.size());
+      // "`field_1`", "`field_3`.`inner_1`", "`field_3`.`inner_2`", "`field_4`.`inner_1`"
+
+      RecordBatchLoader batchLoader = new RecordBatchLoader(getAllocator());
+      QueryDataBatch batch = results.get(0);
+      assertTrue(batchLoader.load(batch.getHeader().getDef(), batch.getData()));
+
+      // this used to be five. It is now four. This is because the plan doesn't
+      // have a project. Scanners are not responsible for projecting non-existent
+      // columns (as long as they project one column)
+      //
+      // That said, the JSON format plugin does claim it can do project
+      // push-down, which means it will ensure columns for any column
+      // mentioned in the project list, in a form consistent with the schema
+      // path. In this case, `non_existent`.`nested`.`field` appears in
+      // the query. But, even more oddly, the missing field is inserted only
+      // if all text mode is true, omitted if all text mode is false.
+      // Seems overly complex.
+      assertEquals(3, batchLoader.getSchema().getFieldCount());
+      testExistentColumns(batchLoader);
+
+      batch.release();
+      batchLoader.clear();
+    } finally {
+      resetSessionOption(ExecConstants.JSON_ALL_TEXT_MODE);
+    }
   }
 
   @Test
@@ -360,32 +395,32 @@ public class TestJsonReader extends BaseTestQuery {
 
   private void testExistentColumns(RecordBatchLoader batchLoader) throws SchemaChangeException {
     VectorWrapper<?> vw = batchLoader.getValueAccessorById(
-        RepeatedBigIntVector.class, //
-        batchLoader.getValueVectorId(SchemaPath.getCompoundPath("field_1")).getFieldIds() //
+        RepeatedBigIntVector.class,
+        batchLoader.getValueVectorId(SchemaPath.getCompoundPath("field_1")).getFieldIds()
     );
     assertEquals("[1]", vw.getValueVector().getAccessor().getObject(0).toString());
     assertEquals("[5]", vw.getValueVector().getAccessor().getObject(1).toString());
     assertEquals("[5,10,15]", vw.getValueVector().getAccessor().getObject(2).toString());
 
     vw = batchLoader.getValueAccessorById(
-        IntVector.class, //
-        batchLoader.getValueVectorId(SchemaPath.getCompoundPath("field_3", "inner_1")).getFieldIds() //
+        IntVector.class,
+        batchLoader.getValueVectorId(SchemaPath.getCompoundPath("field_3", "inner_1")).getFieldIds()
     );
     assertNull(vw.getValueVector().getAccessor().getObject(0));
     assertEquals(2l, vw.getValueVector().getAccessor().getObject(1));
     assertEquals(5l, vw.getValueVector().getAccessor().getObject(2));
 
     vw = batchLoader.getValueAccessorById(
-        IntVector.class, //
-        batchLoader.getValueVectorId(SchemaPath.getCompoundPath("field_3", "inner_2")).getFieldIds() //
+        IntVector.class,
+        batchLoader.getValueVectorId(SchemaPath.getCompoundPath("field_3", "inner_2")).getFieldIds()
     );
     assertNull(vw.getValueVector().getAccessor().getObject(0));
     assertNull(vw.getValueVector().getAccessor().getObject(1));
     assertEquals(3l, vw.getValueVector().getAccessor().getObject(2));
 
     vw = batchLoader.getValueAccessorById(
-        RepeatedBigIntVector.class, //
-        batchLoader.getValueVectorId(SchemaPath.getCompoundPath("field_4", "inner_1")).getFieldIds() //
+        RepeatedBigIntVector.class,
+        batchLoader.getValueVectorId(SchemaPath.getCompoundPath("field_4", "inner_1")).getFieldIds()
     );
     assertEquals("[]", vw.getValueVector().getAccessor().getObject(0).toString());
     assertEquals("[1,2,3]", vw.getValueVector().getAccessor().getObject(1).toString());
@@ -440,7 +475,7 @@ public class TestJsonReader extends BaseTestQuery {
                       )
               ).go();
     } finally {
-      testNoResult("alter session set `exec.enable_union_type` = false");
+      resetSessionOption(ExecConstants.ENABLE_UNION_TYPE_KEY);
     }
   }
 
@@ -457,7 +492,7 @@ public class TestJsonReader extends BaseTestQuery {
               .baselineValues(13L, "BIGINT")
               .go();
     } finally {
-      testNoResult("alter session set `exec.enable_union_type` = false");
+      resetSessionOption(ExecConstants.ENABLE_UNION_TYPE_KEY);
     }
   }
 
@@ -477,7 +512,7 @@ public class TestJsonReader extends BaseTestQuery {
               .baselineValues(3L)
               .go();
     } finally {
-      testNoResult("alter session set `exec.enable_union_type` = false");
+      resetSessionOption(ExecConstants.ENABLE_UNION_TYPE_KEY);
     }
   }
 
@@ -495,7 +530,7 @@ public class TestJsonReader extends BaseTestQuery {
               .baselineValues(9L)
               .go();
     } finally {
-      testNoResult("alter session set `exec.enable_union_type` = false");
+      resetSessionOption(ExecConstants.ENABLE_UNION_TYPE_KEY);
     }
   }
 
@@ -512,7 +547,7 @@ public class TestJsonReader extends BaseTestQuery {
               .baselineValues(11.0)
               .go();
     } finally {
-      testNoResult("alter session set `exec.enable_union_type` = false");
+      resetSessionOption(ExecConstants.ENABLE_UNION_TYPE_KEY);
     }
   }
 
@@ -536,7 +571,7 @@ public class TestJsonReader extends BaseTestQuery {
               .baselineValues(20000L)
               .go();
     } finally {
-      testNoResult("alter session set `exec.enable_union_type` = false");
+      resetSessionOption(ExecConstants.ENABLE_UNION_TYPE_KEY);
     }
   }
 
@@ -565,7 +600,7 @@ public class TestJsonReader extends BaseTestQuery {
               .baselineValues(20000L)
               .go();
     } finally {
-      testNoResult("alter session set `exec.enable_union_type` = false");
+      resetSessionOption(ExecConstants.ENABLE_UNION_TYPE_KEY);
     }
   }
 
@@ -628,7 +663,7 @@ public class TestJsonReader extends BaseTestQuery {
         .go();
 
     } finally {
-      testNoResult("alter session set `store.json.all_text_mode` = false");
+      resetSessionOption(ExecConstants.JSON_ALL_TEXT_MODE);
     }
   }
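Most of the test edits in this file follow one pattern: set a session option inside try and always reset it in finally, so a failing assertion cannot leak the setting into later tests. A condensed, hedged sketch using the same helpers the diff introduces (the test name and query are examples only):

  @Test
  public void exampleOptionScopedTest() throws Exception {
    try {
      alterSession(ExecConstants.JSON_ALL_TEXT_MODE, true);
      test("select * from cp.`store/json/schema_change_int_to_string.json`");
    } finally {
      // always restore the default, even if the query or the assertions fail
      resetSessionOption(ExecConstants.JSON_ALL_TEXT_MODE);
    }
  }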
 


[drill] 08/11: DRILL-7208: Reuse root git.properties file

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit 5655dbbbd8492f3a037b2f3b1fa670f3b71ae66a
Author: Volodymyr Vysotskyi <vv...@gmail.com>
AuthorDate: Thu Nov 28 20:25:57 2019 +0200

    DRILL-7208: Reuse root git.properties file
    
    - Generate git.properties for root module only and copy it to child modules when required
    
    closes #1911
---
 contrib/storage-kudu/.gitignore | 15 -------
 pom.xml                         | 95 +++++++++++++++++++++++------------------
 2 files changed, 53 insertions(+), 57 deletions(-)
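Because the resources plugin now copies the root git.properties into every module's target/classes, the file still lands at the root of each jar. A hedged sketch of how such a file is typically read at runtime; the class name and the property key follow git-commit-id-plugin defaults and are assumptions here, not code from this commit:

  import java.io.IOException;
  import java.io.InputStream;
  import java.util.Properties;

  // Hedged sketch (not Drill code): read the git.properties copied into the jar.
  public final class GitInfo {
    public static String commitId() throws IOException {
      Properties git = new Properties();
      try (InputStream in = GitInfo.class.getResourceAsStream("/git.properties")) {
        if (in != null) {
          git.load(in);
        }
      }
      // "git.commit.id" is the plugin's conventional default key.
      return git.getProperty("git.commit.id", "unknown");
    }
  }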

diff --git a/contrib/storage-kudu/.gitignore b/contrib/storage-kudu/.gitignore
deleted file mode 100644
index f290bae..0000000
--- a/contrib/storage-kudu/.gitignore
+++ /dev/null
@@ -1,15 +0,0 @@
-.project
-.buildpath
-.classpath
-.checkstyle
-.settings/
-.idea/
-TAGS
-*.log
-*.lck
-*.iml
-target/
-*.DS_Store
-*.patch
-*~
-git.properties
diff --git a/pom.xml b/pom.xml
index cbc3548..bcb255b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -463,13 +463,66 @@
           </execution>
         </executions>
       </plugin>
+      <plugin>
+        <groupId>pl.project13.maven</groupId>
+        <artifactId>git-commit-id-plugin</artifactId>
+        <version>4.0.0</version>
+        <executions>
+          <execution>
+            <id>for-source-tarball</id>
+            <goals>
+              <goal>revision</goal>
+            </goals>
+            <inherited>false</inherited>
+            <configuration>
+              <generateGitPropertiesFilename>./git.properties</generateGitPropertiesFilename>
+            </configuration>
+          </execution>
+        </executions>
 
+        <configuration>
+          <dateFormat>dd.MM.yyyy '@' HH:mm:ss z</dateFormat>
+          <verbose>false</verbose>
+          <skipPoms>false</skipPoms>
+          <generateGitPropertiesFile>true</generateGitPropertiesFile>
+          <failOnNoGitDirectory>false</failOnNoGitDirectory>
+          <gitDescribe>
+            <skip>false</skip>
+            <always>false</always>
+            <abbrev>7</abbrev>
+            <dirty>-dirty</dirty>
+            <forceLongFormat>true</forceLongFormat>
+          </gitDescribe>
+        </configuration>
+      </plugin>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-resources-plugin</artifactId>
         <configuration>
           <encoding>UTF-8</encoding>
         </configuration>
+        <executions>
+          <execution>
+            <!-- copy root git.properties file to target/classes folder for every module
+                to ensure that it will be placed into jar -->
+            <phase>initialize</phase>
+            <goals>
+              <goal>copy-resources</goal>
+            </goals>
+            <configuration>
+              <outputDirectory>${project.build.outputDirectory}</outputDirectory>
+              <resources>
+                <resource>
+                  <!--suppress UnresolvedMavenProperty -->
+                  <directory>${maven.multiModuleProjectDirectory}</directory>
+                  <includes>
+                    <include>git.properties</include>
+                  </includes>
+                </resource>
+              </resources>
+            </configuration>
+          </execution>
+        </executions>
       </plugin>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
@@ -527,48 +580,6 @@
           </execution>
         </executions>
       </plugin>
-      <plugin>
-        <groupId>pl.project13.maven</groupId>
-        <artifactId>git-commit-id-plugin</artifactId>
-        <version>2.2.5</version>
-        <executions>
-          <execution>
-            <id>for-jars</id>
-            <inherited>true</inherited>
-            <goals>
-              <goal>revision</goal>
-            </goals>
-            <configuration>
-              <generateGitPropertiesFilename>target/classes/git.properties</generateGitPropertiesFilename>
-            </configuration>
-          </execution>
-          <execution>
-            <id>for-source-tarball</id>
-            <goals>
-              <goal>revision</goal>
-            </goals>
-            <inherited>false</inherited>
-            <configuration>
-              <generateGitPropertiesFilename>./git.properties</generateGitPropertiesFilename>
-            </configuration>
-          </execution>
-        </executions>
-
-        <configuration>
-          <dateFormat>dd.MM.yyyy '@' HH:mm:ss z</dateFormat>
-          <verbose>false</verbose>
-          <skipPoms>false</skipPoms>
-          <generateGitPropertiesFile>true</generateGitPropertiesFile>
-          <failOnNoGitDirectory>false</failOnNoGitDirectory>
-          <gitDescribe>
-            <skip>false</skip>
-            <always>false</always>
-            <abbrev>7</abbrev>
-            <dirty>-dirty</dirty>
-            <forceLongFormat>true</forceLongFormat>
-          </gitDescribe>
-        </configuration>
-      </plugin>
     </plugins>
     <pluginManagement>
       <plugins>


[drill] 04/11: DRILL-5844: Incorrect values of TABLE_TYPE returned from method DatabaseMetaData.getTables of JDBC API

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit de41559e748b7c44139cdd0a3eefef05085570a8
Author: Arjun Gupta <ar...@gmail.com>
AuthorDate: Wed Nov 20 15:37:33 2019 +0530

    DRILL-5844: Incorrect values of TABLE_TYPE returned from method DatabaseMetaData.getTables of JDBC API
    
    closes #1904
---
 .../apache/drill/exec/store/jdbc/TestJdbcPluginWithH2IT.java | 11 +++++++++++
 .../drill/exec/store/jdbc/TestJdbcPluginWithMySQLIT.java     | 12 ++++++++++++
 .../org/apache/drill/exec/store/ischema/FilterEvaluator.java |  2 +-
 .../org/apache/drill/exec/store/ischema/RecordCollector.java |  2 +-
 .../drill/exec/work/metadata/TestMetadataProvider.java       |  2 +-
 .../java/org/apache/drill/jdbc/test/TestJdbcMetadata.java    |  2 +-
 6 files changed, 27 insertions(+), 4 deletions(-)
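The fix makes Drill report JDBC-standard table type names ("SYSTEM TABLE" rather than "SYSTEM_TABLE") through both INFORMATION_SCHEMA and the metadata API. A hedged client-side sketch of what the tests below exercise; the connection URL and the printed columns are illustrative only:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;

  // Hedged sketch (not from this commit): list Drill tables by JDBC table type.
  static void printTables() throws Exception {
    try (Connection c = DriverManager.getConnection("jdbc:drill:drillbit=localhost");
         ResultSet rs = c.getMetaData()
             .getTables(null, null, "%", new String[] {"SYSTEM TABLE", "TABLE", "VIEW"})) {
      while (rs.next()) {
        System.out.println(rs.getString("TABLE_NAME") + " -> " + rs.getString("TABLE_TYPE"));
      }
    }
  }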

diff --git a/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithH2IT.java b/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithH2IT.java
index 1da9cf0..dacf028 100644
--- a/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithH2IT.java
+++ b/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithH2IT.java
@@ -239,4 +239,15 @@ public class TestJdbcPluginWithH2IT extends ClusterTest {
     String plan = queryBuilder().sql(query).explainJson();
     assertEquals(5, queryBuilder().physical(plan).run().recordCount());
   }
+
+  @Test
+  public void testJdbcTableTypes() throws Exception {
+    String query = "select distinct table_type from information_schema.`tables`";
+    testBuilder()
+        .sqlQuery(query)
+        .unOrdered()
+        .baselineColumns("table_type")
+        .baselineValuesForSingleColumn("SYSTEM TABLE", "TABLE")
+        .go();
+  }
 }
diff --git a/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithMySQLIT.java b/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithMySQLIT.java
index 0ff6894..bd065ca 100644
--- a/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithMySQLIT.java
+++ b/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithMySQLIT.java
@@ -329,4 +329,16 @@ public class TestJdbcPluginWithMySQLIT extends ClusterTest {
     String query = "select * from information_schema.`views`";
     run(query);
   }
+
+  @Test
+  public void testJdbcTableTypes() throws Exception {
+    String query = "select distinct table_type from information_schema.`tables` " +
+        "where table_schema like 'mysql%'";
+    testBuilder()
+        .sqlQuery(query)
+        .unOrdered()
+        .baselineColumns("table_type")
+        .baselineValuesForSingleColumn("SYSTEM VIEW", "TABLE", "VIEW")
+        .go();
+  }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/ischema/FilterEvaluator.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/ischema/FilterEvaluator.java
index dc93734..34219fb 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/ischema/FilterEvaluator.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/ischema/FilterEvaluator.java
@@ -179,7 +179,7 @@ public interface FilterEvaluator {
         SHRD_COL_TABLE_SCHEMA, schemaName,
         SCHS_COL_SCHEMA_NAME, schemaName,
         SHRD_COL_TABLE_NAME, tableName,
-        TBLS_COL_TABLE_TYPE, tableType.toString());
+        TBLS_COL_TABLE_TYPE, tableType.jdbcName);
 
       return filter.evaluate(recordValues) != InfoSchemaFilter.Result.FALSE;
     }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/ischema/RecordCollector.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/ischema/RecordCollector.java
index 92f1d14..48c85bf 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/ischema/RecordCollector.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/ischema/RecordCollector.java
@@ -158,7 +158,7 @@ public interface RecordCollector {
 
       return drillSchema.getTableNamesAndTypes().stream()
         .filter(entry -> filterEvaluator.shouldVisitTable(schemaPath, entry.getKey(), entry.getValue()))
-        .map(entry -> new Records.Table(IS_CATALOG_NAME, schemaPath, entry.getKey(), entry.getValue().toString()))
+        .map(entry -> new Records.Table(IS_CATALOG_NAME, schemaPath, entry.getKey(), entry.getValue().jdbcName))
         .collect(Collectors.toList());
     }
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/work/metadata/TestMetadataProvider.java b/exec/java-exec/src/test/java/org/apache/drill/exec/work/metadata/TestMetadataProvider.java
index 21503de..120d6dc 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/work/metadata/TestMetadataProvider.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/work/metadata/TestMetadataProvider.java
@@ -179,7 +179,7 @@ public class TestMetadataProvider extends BaseTestQuery {
   public void tablesWithSystemTableFilter() throws Exception {
     // test("SELECT * FROM INFORMATION_SCHEMA.`TABLES` WHERE TABLE_TYPE IN ('SYSTEM_TABLE')"); // SQL equivalent
 
-    GetTablesResp resp = client.getTables(null, null, null, Collections.singletonList("SYSTEM_TABLE")).get();
+    GetTablesResp resp = client.getTables(null, null, null, Collections.singletonList("SYSTEM TABLE")).get();
 
     assertEquals(RequestStatus.OK, resp.getStatus());
     List<TableMetadata> tables = resp.getTablesList();
diff --git a/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/TestJdbcMetadata.java b/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/TestJdbcMetadata.java
index e7ce62e..990f068 100644
--- a/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/TestJdbcMetadata.java
+++ b/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/TestJdbcMetadata.java
@@ -78,7 +78,7 @@ public class TestJdbcMetadata extends JdbcTestActionBase {
     this.testAction(new JdbcAction(){
       @Override
       public ResultSet getResult(Connection c) throws SQLException {
-        return c.getMetaData().getTables("DRILL", "sys", "opt%", new String[]{"SYSTEM_TABLE", "SYSTEM_VIEW"});
+        return c.getMetaData().getTables("DRILL", "sys", "opt%", new String[]{"SYSTEM TABLE", "SYSTEM_VIEW"});
       }
     }, 2);
   }


[drill] 09/11: DRILL-6904: Update maven-javadoc-plugin, maven-compiler-plugin and maven-assembly-plugin to the latest version

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit 2b9a25c40554a51912d2a5d7372f13632d60eb96
Author: Volodymyr Vysotskyi <vv...@gmail.com>
AuthorDate: Fri Nov 29 16:27:52 2019 +0200

    DRILL-6904: Update maven-javadoc-plugin, maven-compiler-plugin and maven-assembly-plugin to the latest version
---
 pom.xml | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/pom.xml b/pom.xml
index bcb255b..3ea8a95 100644
--- a/pom.xml
+++ b/pom.xml
@@ -722,7 +722,7 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-compiler-plugin</artifactId>
-          <version>3.8.0</version>
+          <version>3.8.1</version>
         </plugin>
         <plugin>
           <artifactId>maven-enforcer-plugin</artifactId>
@@ -915,7 +915,7 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-assembly-plugin</artifactId>
-          <version>3.1.0</version>
+          <version>3.2.0</version>
         </plugin>
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
@@ -925,10 +925,7 @@
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-javadoc-plugin</artifactId>
-          <version>2.10.4</version> <!--the 3.0.1 version causes failures on the release:prepare stage-->
-          <configuration>
-            <additionalparam>-Xdoclint:none</additionalparam>
-          </configuration>
+          <version>3.1.1</version>
         </plugin>
 
         <!--Note: apache-21.pom has the latest versions of apache-rat-plugin, maven-dependency-plugin and


[drill] 07/11: DRILL-7463: Apache license is not added to the generated classes

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit 364a4d35af2a6efd8d658757f69086af8fca77c1
Author: Volodymyr Vysotskyi <vv...@gmail.com>
AuthorDate: Wed Dec 4 11:43:29 2019 +0200

    DRILL-7463: Apache license is not added to the generated classes
    
    closes #1916
---
 .../core/src/main/codegen/includes/license.ftl          | 17 +++++++++++++++++
 exec/java-exec/src/main/codegen/includes/license.ftl    | 17 +++++++++++++++++
 exec/vector/src/main/codegen/includes/license.ftl       | 17 +++++++++++++++++
 3 files changed, 51 insertions(+)

diff --git a/contrib/storage-hive/core/src/main/codegen/includes/license.ftl b/contrib/storage-hive/core/src/main/codegen/includes/license.ftl
index 9faaa72..cae33a4 100644
--- a/contrib/storage-hive/core/src/main/codegen/includes/license.ftl
+++ b/contrib/storage-hive/core/src/main/codegen/includes/license.ftl
@@ -17,3 +17,20 @@
     limitations under the License.
 
 -->
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
diff --git a/exec/java-exec/src/main/codegen/includes/license.ftl b/exec/java-exec/src/main/codegen/includes/license.ftl
index 9faaa72..cae33a4 100644
--- a/exec/java-exec/src/main/codegen/includes/license.ftl
+++ b/exec/java-exec/src/main/codegen/includes/license.ftl
@@ -17,3 +17,20 @@
     limitations under the License.
 
 -->
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
diff --git a/exec/vector/src/main/codegen/includes/license.ftl b/exec/vector/src/main/codegen/includes/license.ftl
index 9faaa72..cae33a4 100644
--- a/exec/vector/src/main/codegen/includes/license.ftl
+++ b/exec/vector/src/main/codegen/includes/license.ftl
@@ -17,3 +17,20 @@
     limitations under the License.
 
 -->
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */


[drill] 03/11: DRILL-6540: Updated Hadoop and HBase libraries to the latest versions

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit db6488238a912fefb5cc5d9e83f5575df51a74e9
Author: Anton Gozhiy <an...@gmail.com>
AuthorDate: Mon Nov 4 14:08:22 2019 +0200

    DRILL-6540: Updated Hadoop and HBase libraries to the latest versions
    
    Hadoop: 3.2.1
    HBase: 2.2.2
    
    closes #1895
---
 .../org/apache/drill/common/util/GuavaPatcher.java |  17 +++-
 distribution/pom.xml                               |  79 +++++++++++++++----
 distribution/src/assemble/component.xml            |  81 +++++++++----------
 distribution/src/{ => main}/resources/LICENSE      |   0
 distribution/src/{ => main}/resources/NOTICE       |   0
 distribution/src/{ => main}/resources/README.md    |   0
 .../src/{ => main}/resources/auto-setup.sh         |   0
 .../src/{ => main}/resources/core-site-example.xml |   0
 .../src/{ => main}/resources/distrib-env.sh        |   0
 .../src/{ => main}/resources/distrib-setup.sh      |   0
 .../src/{ => main}/resources/drill-am-log.xml      |   4 +-
 distribution/src/{ => main}/resources/drill-am.sh  |   0
 distribution/src/{ => main}/resources/drill-conf   |   0
 .../src/{ => main}/resources/drill-config.sh       |   0
 .../src/{ => main}/resources/drill-embedded        |   0
 .../src/{ => main}/resources/drill-embedded.bat    |   0
 distribution/src/{ => main}/resources/drill-env.sh |   0
 .../src/{ => main}/resources/drill-localhost       |   0
 .../resources/drill-on-yarn-example.conf           |   0
 .../src/{ => main}/resources/drill-on-yarn.sh      |   0
 .../resources/drill-override-example.conf          |  48 ++++++------
 .../src/{ => main}/resources/drill-override.conf   |   4 +-
 .../src/{ => main}/resources/drill-setup.sh        |   0
 .../resources/drill-sqlline-override-example.conf  |   0
 distribution/src/{ => main}/resources/drillbit     |   0
 distribution/src/{ => main}/resources/drillbit.sh  |   0
 distribution/src/{ => main}/resources/dumpcat      |   0
 .../src/{ => main}/resources/hadoop-excludes.txt   |   0
 distribution/src/{ => main}/resources/logback.xml  |   8 +-
 distribution/src/{ => main}/resources/runbit       |   0
 .../src/{ => main}/resources/saffron.properties    |   0
 distribution/src/{ => main}/resources/sqlline      |   0
 distribution/src/{ => main}/resources/sqlline.bat  |   1 +
 .../storage-plugins-override-example.conf          |   0
 distribution/src/{ => main}/resources/submit_plan  |   0
 .../src/main/resources/winutils/hadoop.dll         | Bin 0 -> 85504 bytes
 .../src/main/resources/winutils/winutils.exe       | Bin 0 -> 112640 bytes
 .../src/{ => main}/resources/yarn-client-log.xml   |   2 +-
 .../src/{ => main}/resources/yarn-drillbit.sh      |   0
 docs/dev/HadoopWinutils.md                         |  11 +++
 exec/java-exec/pom.xml                             |  20 +----
 .../apache/commons/logging/impl/Log4JLogger.java   |  25 ++++++
 .../drill/exec/store/LocalSyncableFileSystem.java  |   8 +-
 .../exec/physical/unit/TestOutputBatchSize.java    |   5 +-
 .../org/apache/drill/exec/work/batch/FileTest.java |   4 +-
 exec/jdbc-all/pom.xml                              |   2 +-
 .../org/apache/drill/jdbc/DrillbitClassLoader.java |  40 +---------
 .../org/apache/drill/jdbc/ITTestShadedJar.java     |   1 -
 pom.xml                                            |  86 +++++++++++++++++----
 49 files changed, 269 insertions(+), 177 deletions(-)

diff --git a/common/src/main/java/org/apache/drill/common/util/GuavaPatcher.java b/common/src/main/java/org/apache/drill/common/util/GuavaPatcher.java
index 8eed12d..5924df8 100644
--- a/common/src/main/java/org/apache/drill/common/util/GuavaPatcher.java
+++ b/common/src/main/java/org/apache/drill/common/util/GuavaPatcher.java
@@ -135,14 +135,29 @@ public class GuavaPatcher {
               + "      throw new IllegalArgumentException(format(errorMessageTemplate, new Object[] { new Integer(arg1) }));\n"
               + "    }\n"
               + "  }",
+          "public static void checkArgument(boolean expression, String errorMessageTemplate, long arg1) {\n"
+              + "    if (!expression) {\n"
+              + "      throw new IllegalArgumentException(format(errorMessageTemplate, new Object[] { new Long(arg1) }));\n"
+              + "    }\n"
+              + "  }",
+          "public static void checkArgument(boolean expression, String errorMessageTemplate, long arg1, long arg2) {\n"
+              + "    if (!expression) {\n"
+              + "      throw new IllegalArgumentException(format(errorMessageTemplate, new Object[] { new Long(arg1), new Long(arg2)}));\n"
+              + "    }\n"
+              + "  }",
           "public static Object checkNotNull(Object reference, String errorMessageTemplate, int arg1) {\n"
               + "    if (reference == null) {\n"
               + "      throw new NullPointerException(format(errorMessageTemplate, new Object[] { new Integer(arg1) }));\n"
               + "    } else {\n"
               + "      return reference;\n"
               + "    }\n"
+              + "  }",
+          "public static void checkState(boolean expression, String errorMessageTemplate, int arg1) {\n"
+              + "    if (!expression) {\n"
+              + "      throw new IllegalStateException(format(errorMessageTemplate, new Object[] { new Integer(arg1) }));\n"
+              + "    }\n"
               + "  }"
-      );
+    );
 
       List<String> newMethods = IntStream.rangeClosed(startIndex, endIndex)
           .mapToObj(
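The overloads injected above mirror the primitive-specialized Preconditions methods that newer Hadoop and HBase releases call; without the patch those call sites would not resolve against the older shaded Guava. A hedged sketch of that kind of call site (the method, values, and messages are made up; inside Drill the import would be the shaded org.apache.drill.shaded.guava path, plain com.google.common.base.Preconditions elsewhere):

  // Hedged sketch: Hadoop 3.x-style call sites that need the long/int overloads
  // of checkArgument and checkState added by the patcher.
  static void exampleCallSites(long used, long capacity, int refCount) {
    Preconditions.checkArgument(used <= capacity,
        "used %s exceeds capacity %s", used, capacity);
    Preconditions.checkState(refCount > 0,
        "expected a positive ref count, got %s", refCount);
  }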
diff --git a/distribution/pom.xml b/distribution/pom.xml
index 10113bd..2c80d4b 100644
--- a/distribution/pom.xml
+++ b/distribution/pom.xml
@@ -335,15 +335,30 @@
           <name>!alt-hadoop</name>
         </property>
       </activation>
-      <dependencies>
-        <dependency>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-winutils</artifactId>
-          <version>2.7.1</version>
-          <type>zip</type>
-        </dependency>
-      </dependencies>
       <build>
+        <plugins>
+          <plugin>
+            <artifactId>maven-resources-plugin</artifactId>
+            <executions>
+              <execution>
+                <id>copy-winutils</id>
+                <phase>process-resources</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <outputDirectory>${project.build.directory}/winutils</outputDirectory>
+                  <resources>
+                    <resource>
+                      <directory>src/main/resources/winutils</directory>
+                      <filtering>false</filtering>
+                    </resource>
+                  </resources>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
       </build>
     </profile>
     <profile>
@@ -361,12 +376,6 @@
           </exclusions>
         </dependency>
         <dependency>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-winutils</artifactId>
-          <version>2.7.0-mapr-1506</version>
-          <type>zip</type>
-        </dependency>
-        <dependency>
           <groupId>org.apache.hbase</groupId>
           <artifactId>hbase-client</artifactId>
         </dependency>
@@ -392,6 +401,44 @@
         </dependency>
       </dependencies>
       <build>
+        <plugins>
+          <plugin>
+            <artifactId>maven-resources-plugin</artifactId>
+            <executions>
+              <execution>
+                <id>copy-winutils</id>
+                <phase>none</phase>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-dependency-plugin</artifactId>
+            <version>3.1.1</version>
+            <executions>
+              <execution>
+                <id>unpack-winutils</id>
+                <goals>
+                  <goal>unpack</goal>
+                </goals>
+                <phase>process-resources</phase>
+                <configuration>
+                  <artifactItems>
+                    <artifactItem>
+                      <groupId>org.apache.hadoop</groupId>
+                      <artifactId>hadoop-winutils</artifactId>
+                      <version>2.7.0-mapr-1506</version>
+                      <type>zip</type>
+                      <overWrite>true</overWrite>
+                      <outputDirectory>${project.build.directory}/winutils</outputDirectory>
+                      <excludes>**/*.pdb,**/*.lib,**/*.exp</excludes>
+                    </artifactItem>
+                  </artifactItems>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
       </build>
     </profile>
     <profile>
@@ -494,7 +541,7 @@
                   <directory>/etc/init.d/</directory>
                   <sources>
                     <source>
-                      <location>src/resources/drillbit</location>
+                      <location>src/main/resources/drillbit</location>
                     </source>
                   </sources>
                   <directoryIncluded>false</directoryIncluded>
@@ -599,7 +646,7 @@
                       </mapper>
                     </data>
                     <data>
-                      <src>src/resources/drillbit</src>
+                      <src>src/main/resources/drillbit</src>
                       <dst>/etc/init.d/drillbit</dst>
                       <type>file</type>
                       <mapper>
diff --git a/distribution/src/assemble/component.xml b/distribution/src/assemble/component.xml
index b8001c0..5dabc89 100644
--- a/distribution/src/assemble/component.xml
+++ b/distribution/src/assemble/component.xml
@@ -196,21 +196,6 @@
       </includes>
       <scope>test</scope>
     </dependencySet>
-    <dependencySet>
-      <outputDirectory>winutils/bin</outputDirectory>
-      <unpack>true</unpack>
-      <unpackOptions>
-        <excludes>
-          <exclude>**/*.pdb</exclude>
-          <exclude>**/*.lib</exclude>
-          <exclude>**/*.exp</exclude>
-        </excludes>
-      </unpackOptions>
-      <useProjectArtifact>false</useProjectArtifact>
-      <includes>
-        <include>org.apache.hadoop:hadoop-winutils</include>
-      </includes>
-    </dependencySet>
   </dependencySets>
 
   <fileSets>
@@ -218,6 +203,10 @@
       <directory>../sample-data</directory>
       <outputDirectory>sample-data</outputDirectory>
     </fileSet>
+    <fileSet>
+      <directory>${project.build.directory}/winutils</directory>
+      <outputDirectory>winutils/bin</outputDirectory>
+    </fileSet>
   </fileSets>
 
   <files>
@@ -226,11 +215,11 @@
       <outputDirectory />
     </file>
     <file>
-      <source>src/resources/LICENSE</source>
+      <source>src/main/resources/LICENSE</source>
       <outputDirectory />
     </file>
     <file>
-      <source>src/resources/README.md</source>
+      <source>src/main/resources/README.md</source>
       <outputDirectory />
     </file>
     <file>
@@ -242,146 +231,146 @@
       <outputDirectory />
     </file>
     <file>
-      <source>src/resources/runbit</source>
+      <source>src/main/resources/runbit</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/hadoop-excludes.txt</source>
+      <source>src/main/resources/hadoop-excludes.txt</source>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drillbit.sh</source>
+      <source>src/main/resources/drillbit.sh</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-conf</source>
+      <source>src/main/resources/drill-conf</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-embedded</source>
+      <source>src/main/resources/drill-embedded</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-embedded.bat</source>
+      <source>src/main/resources/drill-embedded.bat</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-localhost</source>
+      <source>src/main/resources/drill-localhost</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-config.sh</source>
+      <source>src/main/resources/drill-config.sh</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/sqlline</source>
+      <source>src/main/resources/sqlline</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/sqlline.bat</source>
+      <source>src/main/resources/sqlline.bat</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-on-yarn.sh</source>
+      <source>src/main/resources/drill-on-yarn.sh</source>
       <fileMode>0750</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-am.sh</source>
+      <source>src/main/resources/drill-am.sh</source>
       <fileMode>0750</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/yarn-drillbit.sh</source>
+      <source>src/main/resources/yarn-drillbit.sh</source>
       <fileMode>0750</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/submit_plan</source>
+      <source>src/main/resources/submit_plan</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-override.conf</source>
+      <source>src/main/resources/drill-override.conf</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/logback.xml</source>
+      <source>src/main/resources/logback.xml</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/yarn-client-log.xml</source>
+      <source>src/main/resources/yarn-client-log.xml</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/drill-am-log.xml</source>
+      <source>src/main/resources/drill-am-log.xml</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/drill-env.sh</source>
+      <source>src/main/resources/drill-env.sh</source>
       <fileMode>0750</fileMode>
       <outputDirectory>conf</outputDirectory>
     </file>
     <file>
-      <source>src/resources/distrib-env.sh</source>
+      <source>src/main/resources/distrib-env.sh</source>
       <fileMode>0750</fileMode>
       <outputDirectory>conf</outputDirectory>
     </file>
     <file>
-      <source>src/resources/auto-setup.sh</source>
+      <source>src/main/resources/auto-setup.sh</source>
       <fileMode>0755</fileMode>
       <outputDirectory>bin</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-setup.sh</source>
+      <source>src/main/resources/drill-setup.sh</source>
       <fileMode>0750</fileMode>
       <outputDirectory>conf</outputDirectory>
     </file>
     <file>
-      <source>src/resources/distrib-setup.sh</source>
+      <source>src/main/resources/distrib-setup.sh</source>
       <fileMode>0750</fileMode>
       <outputDirectory>conf</outputDirectory>
     </file>
     <file>
-      <source>src/resources/drill-override-example.conf</source>
+      <source>src/main/resources/drill-override-example.conf</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/core-site-example.xml</source>
+      <source>src/main/resources/core-site-example.xml</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/saffron.properties</source>
+      <source>src/main/resources/saffron.properties</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/drill-on-yarn-example.conf</source>
+      <source>src/main/resources/drill-on-yarn-example.conf</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/storage-plugins-override-example.conf</source>
+      <source>src/main/resources/storage-plugins-override-example.conf</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
     <file>
-      <source>src/resources/drill-sqlline-override-example.conf</source>
+      <source>src/main/resources/drill-sqlline-override-example.conf</source>
       <outputDirectory>conf</outputDirectory>
       <fileMode>0640</fileMode>
     </file>
diff --git a/distribution/src/resources/LICENSE b/distribution/src/main/resources/LICENSE
similarity index 100%
rename from distribution/src/resources/LICENSE
rename to distribution/src/main/resources/LICENSE
diff --git a/distribution/src/resources/NOTICE b/distribution/src/main/resources/NOTICE
similarity index 100%
rename from distribution/src/resources/NOTICE
rename to distribution/src/main/resources/NOTICE
diff --git a/distribution/src/resources/README.md b/distribution/src/main/resources/README.md
similarity index 100%
rename from distribution/src/resources/README.md
rename to distribution/src/main/resources/README.md
diff --git a/distribution/src/resources/auto-setup.sh b/distribution/src/main/resources/auto-setup.sh
similarity index 100%
rename from distribution/src/resources/auto-setup.sh
rename to distribution/src/main/resources/auto-setup.sh
diff --git a/distribution/src/resources/core-site-example.xml b/distribution/src/main/resources/core-site-example.xml
similarity index 100%
rename from distribution/src/resources/core-site-example.xml
rename to distribution/src/main/resources/core-site-example.xml
diff --git a/distribution/src/resources/distrib-env.sh b/distribution/src/main/resources/distrib-env.sh
similarity index 100%
rename from distribution/src/resources/distrib-env.sh
rename to distribution/src/main/resources/distrib-env.sh
diff --git a/distribution/src/resources/distrib-setup.sh b/distribution/src/main/resources/distrib-setup.sh
similarity index 100%
rename from distribution/src/resources/distrib-setup.sh
rename to distribution/src/main/resources/distrib-setup.sh
diff --git a/distribution/src/resources/drill-am-log.xml b/distribution/src/main/resources/drill-am-log.xml
similarity index 99%
rename from distribution/src/resources/drill-am-log.xml
rename to distribution/src/main/resources/drill-am-log.xml
index 187ca9d..3d1a9a3 100644
--- a/distribution/src/resources/drill-am-log.xml
+++ b/distribution/src/main/resources/drill-am-log.xml
@@ -27,14 +27,14 @@
  and from there into the YARN-provided output log directory.
 -->
 <configuration>
-   
+
   <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
     <encoder>
       <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
       </pattern>
     </encoder>
   </appender>
-    
+
   <logger name="org.apache.drill" additivity="false">
     <level value="info" />
     <appender-ref ref="STDOUT" />
diff --git a/distribution/src/resources/drill-am.sh b/distribution/src/main/resources/drill-am.sh
similarity index 100%
rename from distribution/src/resources/drill-am.sh
rename to distribution/src/main/resources/drill-am.sh
diff --git a/distribution/src/resources/drill-conf b/distribution/src/main/resources/drill-conf
similarity index 100%
rename from distribution/src/resources/drill-conf
rename to distribution/src/main/resources/drill-conf
diff --git a/distribution/src/resources/drill-config.sh b/distribution/src/main/resources/drill-config.sh
similarity index 100%
rename from distribution/src/resources/drill-config.sh
rename to distribution/src/main/resources/drill-config.sh
diff --git a/distribution/src/resources/drill-embedded b/distribution/src/main/resources/drill-embedded
similarity index 100%
rename from distribution/src/resources/drill-embedded
rename to distribution/src/main/resources/drill-embedded
diff --git a/distribution/src/resources/drill-embedded.bat b/distribution/src/main/resources/drill-embedded.bat
similarity index 100%
rename from distribution/src/resources/drill-embedded.bat
rename to distribution/src/main/resources/drill-embedded.bat
diff --git a/distribution/src/resources/drill-env.sh b/distribution/src/main/resources/drill-env.sh
similarity index 100%
rename from distribution/src/resources/drill-env.sh
rename to distribution/src/main/resources/drill-env.sh
diff --git a/distribution/src/resources/drill-localhost b/distribution/src/main/resources/drill-localhost
similarity index 100%
rename from distribution/src/resources/drill-localhost
rename to distribution/src/main/resources/drill-localhost
diff --git a/distribution/src/resources/drill-on-yarn-example.conf b/distribution/src/main/resources/drill-on-yarn-example.conf
similarity index 100%
rename from distribution/src/resources/drill-on-yarn-example.conf
rename to distribution/src/main/resources/drill-on-yarn-example.conf
diff --git a/distribution/src/resources/drill-on-yarn.sh b/distribution/src/main/resources/drill-on-yarn.sh
similarity index 100%
rename from distribution/src/resources/drill-on-yarn.sh
rename to distribution/src/main/resources/drill-on-yarn.sh
diff --git a/distribution/src/resources/drill-override-example.conf b/distribution/src/main/resources/drill-override-example.conf
similarity index 88%
rename from distribution/src/resources/drill-override-example.conf
rename to distribution/src/main/resources/drill-override-example.conf
index 5aa45a9..e72396c 100644
--- a/distribution/src/resources/drill-override-example.conf
+++ b/distribution/src/main/resources/drill-override-example.conf
@@ -13,8 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-#  This file tells Drill to consider this module when class path scanning.  
-#  This file can also include any supplementary configuration information.  
+#  This file tells Drill to consider this module when class path scanning.
+#  This file can also include any supplementary configuration information.
 #  This file is in HOCON format, see https://github.com/typesafehub/config/blob/master/HOCON.md for more information.
 
 drill.logical.function.packages += "org.apache.drill.exec.expr.fn.impl"
@@ -41,7 +41,7 @@ drill.exec: {
         threads: 1
       }
     },
-  	use.ip : false
+    use.ip : false
   },
   operator: {
     packages += "org.apache.drill.exec.physical.config"
@@ -64,28 +64,28 @@ drill.exec: {
     action_on_plugins_override_file: "none"
   },
   zk: {
-	connect: "localhost:2181",
-	root: "drill",
-	refresh: 500,
-	timeout: 5000,
-  	retry: {
-  	  count: 7200,
-  	  delay: 500
-  	}
-  	# This option controls whether Drill specifies ACLs when it creates znodes.
-  	# If this is 'false', then anyone has all privileges for all Drill znodes.
-  	# This corresponds to ZOO_OPEN_ACL_UNSAFE.
-  	# Setting this flag to 'true' enables the provider specified in "acl_provider"
-  	apply_secure_acl: false,
+    connect: "localhost:2181",
+    root: "drill",
+    refresh: 500,
+    timeout: 5000,
+    retry: {
+      count: 7200,
+      delay: 500
+    }
+    # This option controls whether Drill specifies ACLs when it creates znodes.
+    # If this is 'false', then anyone has all privileges for all Drill znodes.
+    # This corresponds to ZOO_OPEN_ACL_UNSAFE.
+    # Setting this flag to 'true' enables the provider specified in "acl_provider"
+    apply_secure_acl: false,
 
-  	# This option specified the ACL provider to be used by Drill.
-  	# Custom ACL providers can be provided in the Drillbit classpath and Drill can be made to pick them
-  	# by changing this option.
-  	# Note: This option has no effect if "apply_secure_acl" is 'false'
-  	#
-  	# The default "creator-all" will setup ACLs such that
-  	#    - Only the Drillbit user will have all privileges(create, delete, read, write, admin). Same as ZOO_CREATOR_ALL_ACL
-  	#    - Other users will only be able to read the cluster-discovery(list of Drillbits in the cluster) znodes.
+    # This option specified the ACL provider to be used by Drill.
+    # Custom ACL providers can be provided in the Drillbit classpath and Drill can be made to pick them
+    # by changing this option.
+    # Note: This option has no effect if "apply_secure_acl" is 'false'
+    #
+    # The default "creator-all" will setup ACLs such that
+    #    - Only the Drillbit user will have all privileges(create, delete, read, write, admin). Same as ZOO_CREATOR_ALL_ACL
+    #    - Other users will only be able to read the cluster-discovery(list of Drillbits in the cluster) znodes.
     #
     acl_provider: "creator-all"
   },
diff --git a/distribution/src/resources/drill-override.conf b/distribution/src/main/resources/drill-override.conf
similarity index 97%
rename from distribution/src/resources/drill-override.conf
rename to distribution/src/main/resources/drill-override.conf
index b484ea3..9f07e19 100644
--- a/distribution/src/resources/drill-override.conf
+++ b/distribution/src/main/resources/drill-override.conf
@@ -13,8 +13,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-#  This file tells Drill to consider this module when class path scanning.  
-#  This file can also include any supplementary configuration information.  
+#  This file tells Drill to consider this module when class path scanning.
+#  This file can also include any supplementary configuration information.
 #  This file is in HOCON format, see https://github.com/typesafehub/config/blob/master/HOCON.md for more information.
 
 # See 'drill-override-example.conf' for example configurations
diff --git a/distribution/src/resources/drill-setup.sh b/distribution/src/main/resources/drill-setup.sh
similarity index 100%
rename from distribution/src/resources/drill-setup.sh
rename to distribution/src/main/resources/drill-setup.sh
diff --git a/distribution/src/resources/drill-sqlline-override-example.conf b/distribution/src/main/resources/drill-sqlline-override-example.conf
similarity index 100%
rename from distribution/src/resources/drill-sqlline-override-example.conf
rename to distribution/src/main/resources/drill-sqlline-override-example.conf
diff --git a/distribution/src/resources/drillbit b/distribution/src/main/resources/drillbit
similarity index 100%
rename from distribution/src/resources/drillbit
rename to distribution/src/main/resources/drillbit
diff --git a/distribution/src/resources/drillbit.sh b/distribution/src/main/resources/drillbit.sh
similarity index 100%
rename from distribution/src/resources/drillbit.sh
rename to distribution/src/main/resources/drillbit.sh
diff --git a/distribution/src/resources/dumpcat b/distribution/src/main/resources/dumpcat
similarity index 100%
rename from distribution/src/resources/dumpcat
rename to distribution/src/main/resources/dumpcat
diff --git a/distribution/src/resources/hadoop-excludes.txt b/distribution/src/main/resources/hadoop-excludes.txt
similarity index 100%
rename from distribution/src/resources/hadoop-excludes.txt
rename to distribution/src/main/resources/hadoop-excludes.txt
diff --git a/distribution/src/resources/logback.xml b/distribution/src/main/resources/logback.xml
similarity index 99%
rename from distribution/src/resources/logback.xml
rename to distribution/src/main/resources/logback.xml
index 7d95063..720826e 100644
--- a/distribution/src/resources/logback.xml
+++ b/distribution/src/main/resources/logback.xml
@@ -27,7 +27,7 @@
     <RemoteHosts>${LILITH_HOSTNAME:-localhost}</RemoteHosts>
   </appender>
    -->
-   
+
   <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
     <encoder>
       <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
@@ -50,8 +50,8 @@
         <pattern>%msg%n</pattern>
       </encoder>
     </appender>
-    
-    
+
+
     <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
       <file>${log.path}</file>
       <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
@@ -80,7 +80,7 @@
     <!--     <appender-ref ref="SOCKET" /> -->
   </logger>
 
-  <!-- 
+  <!--
   <logger name="org.apache.drill" additivity="false">
     <level value="debug" />
     <appender-ref ref="SOCKET" />
diff --git a/distribution/src/resources/runbit b/distribution/src/main/resources/runbit
similarity index 100%
rename from distribution/src/resources/runbit
rename to distribution/src/main/resources/runbit
diff --git a/distribution/src/resources/saffron.properties b/distribution/src/main/resources/saffron.properties
similarity index 100%
rename from distribution/src/resources/saffron.properties
rename to distribution/src/main/resources/saffron.properties
diff --git a/distribution/src/resources/sqlline b/distribution/src/main/resources/sqlline
similarity index 100%
rename from distribution/src/resources/sqlline
rename to distribution/src/main/resources/sqlline
diff --git a/distribution/src/resources/sqlline.bat b/distribution/src/main/resources/sqlline.bat
similarity index 96%
rename from distribution/src/resources/sqlline.bat
rename to distribution/src/main/resources/sqlline.bat
index daf5afe..c5ab816 100755
--- a/distribution/src/resources/sqlline.bat
+++ b/distribution/src/main/resources/sqlline.bat
@@ -156,6 +156,7 @@ if "test%HADOOP_HOME%" == "test" (
   set HADOOP_CLASSPATH=%HADOOP_HOME%\conf;!HADOOP_CLASSPATH!
   set USE_HADOOP_CP=1
 )
+set PATH=!HADOOP_HOME!\bin;!PATH!
 
 rem ----
 rem Deal with HBase JARs, if HBASE_HOME was specified
diff --git a/distribution/src/resources/storage-plugins-override-example.conf b/distribution/src/main/resources/storage-plugins-override-example.conf
similarity index 100%
rename from distribution/src/resources/storage-plugins-override-example.conf
rename to distribution/src/main/resources/storage-plugins-override-example.conf
diff --git a/distribution/src/resources/submit_plan b/distribution/src/main/resources/submit_plan
similarity index 100%
rename from distribution/src/resources/submit_plan
rename to distribution/src/main/resources/submit_plan
diff --git a/distribution/src/main/resources/winutils/hadoop.dll b/distribution/src/main/resources/winutils/hadoop.dll
new file mode 100644
index 0000000..e7ffeac
Binary files /dev/null and b/distribution/src/main/resources/winutils/hadoop.dll differ
diff --git a/distribution/src/main/resources/winutils/winutils.exe b/distribution/src/main/resources/winutils/winutils.exe
new file mode 100644
index 0000000..027ca13
Binary files /dev/null and b/distribution/src/main/resources/winutils/winutils.exe differ
diff --git a/distribution/src/resources/yarn-client-log.xml b/distribution/src/main/resources/yarn-client-log.xml
similarity index 99%
rename from distribution/src/resources/yarn-client-log.xml
rename to distribution/src/main/resources/yarn-client-log.xml
index feca2c9..7e64be6 100644
--- a/distribution/src/resources/yarn-client-log.xml
+++ b/distribution/src/main/resources/yarn-client-log.xml
@@ -25,7 +25,7 @@
  See http://logback.qos.ch/manual/index.html for more information.
 -->
 <configuration>
-   
+
   <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
     <encoder>
       <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
diff --git a/distribution/src/resources/yarn-drillbit.sh b/distribution/src/main/resources/yarn-drillbit.sh
similarity index 100%
rename from distribution/src/resources/yarn-drillbit.sh
rename to distribution/src/main/resources/yarn-drillbit.sh
diff --git a/docs/dev/HadoopWinutils.md b/docs/dev/HadoopWinutils.md
new file mode 100644
index 0000000..7b5b2bc
--- /dev/null
+++ b/docs/dev/HadoopWinutils.md
@@ -0,0 +1,11 @@
+## Hadoop Winutils
+
+Hadoop Winutils native libraries are required to run Drill on Windows. The last version published to the Maven repository is 2.7.1, and it is no longer updated.
+For that reason, a Winutils build matching the Hadoop version used by Drill is bundled in distribution/src/main/resources.
+
+Current Winutils version: *3.2.1*.
+
+## References
+- Official wiki: [Windows Problems](https://cwiki.apache.org/confluence/display/HADOOP2/WindowsProblems).
+- The Winutils build process is described [here](https://github.com/steveloughran/winutils).
+- Up-to-date builds are published [here](https://github.com/cdarlint/winutils).
\ No newline at end of file
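
[Editorial note] For readers running Drill on Windows, the sketch below shows one way to point Hadoop at a locally unpacked Winutils before any FileSystem is created. This is only an illustration under stated assumptions: the directory path and the class name are hypothetical, and Hadoop's own lookup of HADOOP_HOME or the hadoop.home.dir system property (in org.apache.hadoop.util.Shell) remains the authoritative mechanism.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper, not part of Drill: verifies that Hadoop can
    // initialize on Windows once winutils.exe is reachable.
    public class WinutilsSmokeTest {
      public static void main(String[] args) throws Exception {
        // Hadoop looks for winutils.exe under %HADOOP_HOME%\bin or under the
        // hadoop.home.dir system property. The directory below is an assumption;
        // it must contain a bin folder holding winutils.exe and hadoop.dll.
        System.setProperty("hadoop.home.dir", "C:\\drill\\winutils");

        // A basic local-filesystem call to confirm Hadoop starts up cleanly.
        FileSystem fs = FileSystem.getLocal(new Configuration());
        System.out.println(fs.exists(new Path("C:\\tmp")));
      }
    }
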
diff --git a/exec/java-exec/pom.xml b/exec/java-exec/pom.xml
index 176b388..4d4cd59 100644
--- a/exec/java-exec/pom.xml
+++ b/exec/java-exec/pom.xml
@@ -454,20 +454,6 @@
           <groupId>commons-codec</groupId>
           <artifactId>commons-codec</artifactId>
         </exclusion>
-<!---->
-        <!--<exclusion>-->
-          <!--<groupId>com.sun.jersey</groupId>-->
-          <!--<artifactId>jersey-core</artifactId>-->
-        <!--</exclusion>-->
-        <!--<exclusion>-->
-          <!--<groupId>com.sun.jersey</groupId>-->
-          <!--<artifactId>jersey-server</artifactId>-->
-        <!--</exclusion>-->
-        <!--<exclusion>-->
-          <!--<groupId>com.sun.jersey</groupId>-->
-          <!--<artifactId>jersey-json</artifactId>-->
-        <!--</exclusion>-->
-<!---->
       </exclusions>
     </dependency>
     <dependency>
@@ -487,7 +473,6 @@
           <groupId>commons-codec</groupId>
           <artifactId>commons-codec</artifactId>
         </exclusion>
-        <!---->
         <exclusion>
           <groupId>com.sun.jersey</groupId>
           <artifactId>jersey-core</artifactId>
@@ -500,7 +485,6 @@
           <groupId>com.sun.jersey</groupId>
           <artifactId>jersey-json</artifactId>
         </exclusion>
-        <!---->
         <exclusion>
           <groupId>log4j</groupId>
           <artifactId>log4j</artifactId>
@@ -537,6 +521,10 @@
           <groupId>commons-codec</groupId>
           <artifactId>commons-codec</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>commons-logging</groupId>
+          <artifactId>commons-logging</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
     <dependency>
diff --git a/exec/java-exec/src/main/java/org/apache/commons/logging/impl/Log4JLogger.java b/exec/java-exec/src/main/java/org/apache/commons/logging/impl/Log4JLogger.java
new file mode 100644
index 0000000..4b41ea6
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/commons/logging/impl/Log4JLogger.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.logging.impl;
+
+/**
+ * A mock class to avoid NoClassDefFoundError after excluding Apache commons-logging from Hadoop dependency.
+ * See <a href="https://issues.apache.org/jira/browse/HADOOP-10288">HADOOP-10288</a> for the problem description.
+ */
+public class Log4JLogger {
+}
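
[Editorial note] As background for the mock class above: once commons-logging is excluded from the Hadoop dependencies, any commons-logging API calls Hadoop makes are expected to be served by the jcl-over-slf4j bridge that is already among Drill's slf4j dependencies (see the pom.xml changes later in this change set). A minimal sketch of that routing from a caller's point of view, assuming logback is the bound SLF4J backend:

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    // Illustrative only, not part of this change.
    public class JclBridgeExample {
      private static final Log LOG = LogFactory.getLog(JclBridgeExample.class);

      public static void main(String[] args) {
        // With commons-logging excluded and jcl-over-slf4j on the classpath,
        // LogFactory and Log are supplied by the bridge jar and delegate to SLF4J,
        // so this message lands in the same logback appenders Drill configures.
        LOG.info("commons-logging call routed through jcl-over-slf4j");
      }
    }
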
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/LocalSyncableFileSystem.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/LocalSyncableFileSystem.java
index 21d66f0..617d409 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/LocalSyncableFileSystem.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/LocalSyncableFileSystem.java
@@ -65,7 +65,7 @@ public class LocalSyncableFileSystem extends FileSystem {
 
   @Override
   public FSDataOutputStream create(Path path, FsPermission fsPermission, boolean b, int i, short i2, long l, Progressable progressable) throws IOException {
-    return new FSDataOutputStream(new LocalSyncableOutputStream(path), new Statistics(path.toUri().getScheme()));
+    return new FSDataOutputStream(new LocalSyncableOutputStream(path), FileSystem.getStatistics(path.toUri().getScheme(), getClass()));
   }
 
   @Override
@@ -143,8 +143,7 @@ public class LocalSyncableFileSystem extends FileSystem {
 
     // TODO: remove it after upgrade MapR profile onto hadoop.version 3.1
     public void sync() throws IOException {
-      output.flush();
-      fos.getFD().sync();
+      hflush();
     }
 
     @Override
@@ -155,8 +154,7 @@ public class LocalSyncableFileSystem extends FileSystem {
 
     @Override
     public void hflush() throws IOException {
-      output.flush();
-      fos.getFD().sync();
+      hsync();
     }
 
     @Override
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/TestOutputBatchSize.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/TestOutputBatchSize.java
index 97ffb21..d7b6170 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/TestOutputBatchSize.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/TestOutputBatchSize.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.exec.physical.unit;
 
+import org.apache.commons.lang3.StringUtils;
 import org.apache.drill.shaded.guava.com.google.common.collect.ImmutableList;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.apache.calcite.rel.core.JoinRelType;
@@ -327,7 +328,7 @@ public class TestOutputBatchSize extends PhysicalOpUnitTestBase {
         expr[i * 2] = "lower(" + baselineColumns[i] + ")";
         expr[i * 2 + 1] = baselineColumns[i];
       }
-      baselineValues[i] = (transfer ? testString : testString.toLowerCase());
+      baselineValues[i] = (transfer ? testString : StringUtils.lowerCase(testString));
     }
     jsonRow.append("}");
     StringBuilder batchString = new StringBuilder("[");
@@ -384,7 +385,7 @@ public class TestOutputBatchSize extends PhysicalOpUnitTestBase {
       expr[i * 2] = "lower(" + baselineColumns[i] + ")";
       expr[i * 2 + 1] = baselineColumns[i];
 
-      baselineValues[i] = testString.toLowerCase();
+      baselineValues[i] = StringUtils.lowerCase(testString);
     }
 
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/work/batch/FileTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/work/batch/FileTest.java
index 04e59f6..c736ce0 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/work/batch/FileTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/work/batch/FileTest.java
@@ -43,7 +43,7 @@ public class FileTest {
     FSDataOutputStream out = fs.create(path);
     byte[] s = "hello world".getBytes();
     out.write(s);
-    out.hsync();
+    out.hflush();
     FSDataInputStream in = fs.open(path);
     byte[] bytes = new byte[s.length];
     in.read(bytes);
@@ -60,7 +60,7 @@ public class FileTest {
       bytes = new byte[256*1024];
       Stopwatch watch = Stopwatch.createStarted();
       out.write(bytes);
-      out.hsync();
+      out.hflush();
       long t = watch.elapsed(TimeUnit.MILLISECONDS);
       logger.info(String.format("Elapsed: %d. Rate %d.\n", t, (long) ((long) bytes.length * 1000L / t)));
     }
diff --git a/exec/jdbc-all/pom.xml b/exec/jdbc-all/pom.xml
index d523606..97433fa 100644
--- a/exec/jdbc-all/pom.xml
+++ b/exec/jdbc-all/pom.xml
@@ -531,7 +531,7 @@
                   This is likely due to you adding new dependencies to a java-exec and not updating the excludes in this module. This is important as it minimizes the size of the dependency of Drill application users.
 
                   </message>
-                  <maxsize>42600000</maxsize>
+                  <maxsize>43000000</maxsize>
                   <minsize>15000000</minsize>
                   <files>
                    <file>${project.build.directory}/drill-jdbc-all-${project.version}.jar</file>
diff --git a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/DrillbitClassLoader.java b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/DrillbitClassLoader.java
index eaedf56..072e269 100644
--- a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/DrillbitClassLoader.java
+++ b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/DrillbitClassLoader.java
@@ -33,9 +33,9 @@ public class DrillbitClassLoader extends URLClassLoader {
   private static final URL[] URLS;
 
   static {
-    ArrayList<URL> urlList = new ArrayList<>();
+    List<URL> urlList = new ArrayList<>();
     final String classPath = System.getProperty("app.class.path");
-    final String[] st = fracture(classPath);
+    final String[] st = classPath.split(File.pathSeparator);
     final int l = st.length;
     for (int i = 0; i < l; i++) {
       try {
@@ -47,42 +47,8 @@ public class DrillbitClassLoader extends URLClassLoader {
         assert false : e.toString();
       }
     }
-    urlList.toArray(new URL[urlList.size()]);
 
-    List<URL> urls = new ArrayList<>(urlList);
-    URLS = urls.toArray(new URL[urls.size()]);
-  }
-
-  /**
-   * Helper method to avoid StringTokenizer using.
-   *
-   * Taken from Apache Harmony
-   */
-  private static String[] fracture(String str) {
-    if (str.length() == 0) {
-      return new String[0];
-    }
-    ArrayList<String> res = new ArrayList<>();
-    int in = 0;
-    int curPos = 0;
-    int i = str.indexOf(File.pathSeparator);
-    int len = File.pathSeparator.length();
-    while (i != -1) {
-      String s = str.substring(curPos, i);
-      res.add(s);
-      in++;
-      curPos = i + len;
-      i = str.indexOf(File.pathSeparator, curPos);
-    }
-
-    len = str.length();
-    if (curPos <= len) {
-      String s = str.substring(curPos, len);
-      in++;
-      res.add(s);
-    }
-
-    return res.toArray(new String[in]);
+    URLS = urlList.toArray(new URL[0]);
   }
 
   @Override
diff --git a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
index 19a4be8..3557a4a 100644
--- a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
+++ b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
@@ -105,7 +105,6 @@ public class ITTestShadedJar extends BaseTest {
       super.failed(e, description);
       done();
       runMethod("failed", description);
-      logger.error("Check whether this test was running within 'integration-test' Maven phase");
     }
 
     private void done() {
diff --git a/pom.xml b/pom.xml
index e012ec2..cbc3548 100644
--- a/pom.xml
+++ b/pom.xml
@@ -82,8 +82,8 @@
       Apache Hive 2.3.2. If the version is changed, make sure the jars and their dependencies are updated.
     -->
     <hive.version>2.3.2</hive.version>
-    <hadoop.version>3.0.3</hadoop.version>
-    <hbase.version>2.1.4</hbase.version>
+    <hadoop.version>3.2.1</hadoop.version>
+    <hbase.version>2.2.2</hbase.version>
     <fmpp.version>1.0</fmpp.version>
     <freemarker.version>2.3.28</freemarker.version>
     <javassist.version>3.25.0-GA</javassist.version>
@@ -511,7 +511,7 @@
               <rules>
                 <bannedDependencies>
                   <excludes>
-                    <!--<exclude>commons-logging</exclude>-->
+                    <exclude>commons-logging</exclude>
                     <exclude>javax.servlet:servlet-api</exclude>
                     <exclude>org.mortbay.jetty:servlet-api</exclude>
                     <exclude>org.mortbay.jetty:servlet-api-2.5</exclude>
@@ -1064,22 +1064,22 @@
       <dependency>
           <groupId>org.slf4j</groupId>
           <artifactId>slf4j-api</artifactId>
-          <version>${dep.slf4j.version}</version>
+          <version>${slf4j.version}</version>
       </dependency>
       <dependency>
           <groupId>org.slf4j</groupId>
           <artifactId>jul-to-slf4j</artifactId>
-          <version>${dep.slf4j.version}</version>
+          <version>${slf4j.version}</version>
       </dependency>
       <dependency>
           <groupId>org.slf4j</groupId>
           <artifactId>jcl-over-slf4j</artifactId>
-          <version>${dep.slf4j.version}</version>
+          <version>${slf4j.version}</version>
       </dependency>
       <dependency>
           <groupId>org.slf4j</groupId>
           <artifactId>log4j-over-slf4j</artifactId>
-          <version>${dep.slf4j.version}</version>
+          <version>${slf4j.version}</version>
       </dependency>
       <dependency>
         <groupId>${calcite.groupId}</groupId>
@@ -1913,14 +1913,14 @@
                 <artifactId>mockito-all</artifactId>
                 <groupId>org.mockito</groupId>
               </exclusion>
-              <!--<exclusion>-->
-                <!--<artifactId>commons-logging-api</artifactId>-->
-                <!--<groupId>commons-logging</groupId>-->
-              <!--</exclusion>-->
-              <!--<exclusion>-->
-                <!--<artifactId>commons-logging</artifactId>-->
-                <!--<groupId>commons-logging</groupId>-->
-              <!--</exclusion>-->
+              <exclusion>
+                <artifactId>commons-logging-api</artifactId>
+                <groupId>commons-logging</groupId>
+              </exclusion>
+              <exclusion>
+                <artifactId>commons-logging</artifactId>
+                <groupId>commons-logging</groupId>
+              </exclusion>
               <exclusion>
                 <groupId>com.sun.jersey</groupId>
                 <artifactId>jersey-core</artifactId>
@@ -1963,7 +1963,56 @@
               </exclusion>
             </exclusions>
           </dependency>
-          <!-- Hadoop Test Dependencies -->
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-auth</artifactId>
+            <version>${hadoop.version}</version>
+            <exclusions>
+              <exclusion>
+                <groupId>net.minidev</groupId>
+                <artifactId>json-smart</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>com.nimbusds</groupId>
+                <artifactId>nimbus-jose-jwt</artifactId>
+              </exclusion>
+            </exclusions>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
+            <version>${hadoop.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-mapreduce-client-core</artifactId>
+            <version>${hadoop.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-archives</artifactId>
+            <version>${hadoop.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-yarn-server-resourcemanager</artifactId>
+            <version>${hadoop.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-yarn-registry</artifactId>
+            <version>${hadoop.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-distcp</artifactId>
+            <version>${hadoop.version}</version>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-minicluster</artifactId>
+            <version>${hadoop.version}</version>
+          </dependency>
           <dependency>
             <groupId>org.apache.hadoop</groupId>
             <artifactId>hadoop-common</artifactId>
@@ -2093,6 +2142,9 @@
               </exclusion>
             </exclusions>
           </dependency>
+
+          <!-- Hadoop Test Dependencies -->
+
           <dependency>
             <groupId>org.apache.hadoop</groupId>
             <artifactId>hadoop-hdfs</artifactId>
@@ -2130,7 +2182,7 @@
               </exclusion>
             </exclusions>
           </dependency>
-          <!-- Hadoop Test Dependencies -->
+
           <dependency>
             <groupId>org.apache.hadoop</groupId>
             <artifactId>hadoop-client</artifactId>


[drill] 01/11: DRILL-7393: Revisit Drill tests to ensure that patching is executed before any test run

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit ba601b01563afa520896f4c30044c79219f7bb8a
Author: Anton Gozhiy <an...@gmail.com>
AuthorDate: Thu Nov 28 14:04:22 2019 +0200

    DRILL-7393: Revisit Drill tests to ensure that patching is executed before any test run
    
    - Added BaseTest with patchers and extended all tests from it.
    - Added a test to java-exec module to ensure that all tests there are inherited from BaseTest.
     - Revised exception handling in the patchers so that it is now handled individually in each patching method.
    
    closes #1910
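
[Editorial note] To make the first bullet concrete, the intent is roughly the following. This is a sketch only; the actual BaseTest added by this commit lives in the common module (see the diffstat below) and may differ in detail. GuavaPatcher.patch() and ProtobufPatcher.patch() are the methods shown later in this diff.

    package org.apache.drill.test;

    import org.apache.drill.common.util.GuavaPatcher;
    import org.apache.drill.common.util.ProtobufPatcher;

    // Sketch of a common test superclass: the static initializer guarantees that
    // byte-code patching runs once, before any test class extending BaseTest can
    // load the patched Guava/Protobuf classes.
    public class BaseTest {
      static {
        ProtobufPatcher.patch();
        GuavaPatcher.patch();
      }
    }
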
---
 .../org/apache/drill/common/util/GuavaPatcher.java | 233 +++++++++++----------
 .../apache/drill/common/util/ProtobufPatcher.java  | 154 +++++++-------
 .../java/org/apache/drill/common/TestVersion.java  |   3 +-
 .../drill/common/exceptions/TestUserException.java |   3 +-
 .../drill/common/map/TestCaseInsensitiveMap.java   |   3 +-
 .../common/util/function/TestCheckedFunction.java  |   3 +-
 .../test/java/org/apache/drill/test/BaseTest.java  |  42 ++++
 .../Drill2130CommonHamcrestConfigurationTest.java  |   2 +-
 .../test/java/org/apache/drill/test/DrillTest.java |   4 +-
 .../mapr/drill/maprdb/tests/MaprDBTestsSuite.java  |   3 +-
 .../maprdb/tests/json/TestFieldPathHelper.java     |   3 +-
 .../org/apache/drill/hbase/HBaseTestsSuite.java    |  10 +-
 ...l2130StorageHBaseHamcrestConfigurationTest.java |   3 +-
 .../inspectors/SkipFooterRecordsInspectorTest.java |   3 +-
 .../store/hive/schema/TestColumnListCache.java     |   3 +-
 .../store/hive/schema/TestSchemaConversion.java    |   3 +-
 ...30StorageHiveCoreHamcrestConfigurationTest.java |   3 +-
 .../drill/exec/store/kafka/TestKafkaSuit.java      |   3 +-
 .../kafka/decoders/MessageReaderFactoryTest.java   |   3 +-
 .../apache/drill/store/kudu/TestKuduConnect.java   |   3 +-
 .../drill/exec/store/mongo/MongoTestSuit.java      |   3 +-
 .../exec/store/mongo/TestMongoChunkAssignment.java |   3 +-
 .../yarn/appMaster/DrillApplicationMaster.java     |   5 +-
 .../org/apache/drill/yarn/client/DrillOnYarn.java  |   5 +-
 .../org/apache/drill/yarn/client/TestClient.java   |   3 +-
 .../drill/yarn/client/TestCommandLineOptions.java  |   3 +-
 .../org/apache/drill/yarn/core/TestConfig.java     |   3 +-
 .../org/apache/drill/yarn/scripts/TestScripts.java |   3 +-
 .../apache/drill/yarn/zk/TestAmRegistration.java   |   3 +-
 .../org/apache/drill/yarn/zk/TestZkRegistry.java   |   3 +-
 .../java/org/apache/drill/exec/expr/TestPrune.java |  38 ----
 .../org/apache/drill/exec/server/Drillbit.java     |   5 +-
 .../java/org/apache/drill/BaseTestInheritance.java |  54 +++++
 .../java/org/apache/drill/TestImplicitCasting.java |   3 +-
 .../drill/common/scanner/TestClassPathScanner.java |   3 +-
 .../org/apache/drill/exec/TestOpSerialization.java |   3 +-
 .../java/org/apache/drill/exec/TestSSLConfig.java  |   3 +-
 .../ConnectTriesPropertyTestClusterBits.java       |   3 +-
 .../exec/client/DrillSqlLineApplicationTest.java   |   3 +-
 .../drill/exec/compile/TestEvaluationVisitor.java  |   3 +-
 .../drill/exec/coord/zk/TestEphemeralStore.java    |   3 +-
 .../drill/exec/coord/zk/TestEventDispatcher.java   |   3 +-
 .../apache/drill/exec/coord/zk/TestPathUtils.java  |   3 +-
 .../org/apache/drill/exec/coord/zk/TestZKACL.java  |   3 +-
 .../drill/exec/coord/zk/TestZookeeperClient.java   |   3 +-
 .../drill/exec/dotdrill/TestDotDrillUtil.java      |   3 +-
 .../exec/expr/fn/FunctionInitializerTest.java      |   3 +-
 .../drill/exec/expr/fn/impl/TestSqlPatterns.java   |   3 +-
 .../fn/registry/FunctionRegistryHolderTest.java    |   3 +-
 .../physical/impl/common/HashPartitionTest.java    |   3 +-
 .../common/HashTableAllocationTrackerTest.java     |   3 +-
 .../impl/join/TestBatchSizePredictorImpl.java      |   3 +-
 .../impl/join/TestBuildSidePartitioningImpl.java   |   3 +-
 .../join/TestHashJoinHelperSizeCalculatorImpl.java |   3 +-
 .../impl/join/TestHashJoinMemoryCalculator.java    |   3 +-
 ...estHashTableSizeCalculatorConservativeImpl.java |   3 +-
 .../join/TestHashTableSizeCalculatorLeanImpl.java  |   3 +-
 .../exec/physical/impl/join/TestPartitionStat.java |   5 +-
 .../impl/join/TestPostBuildCalculationsImpl.java   |   3 +-
 .../scan/project/projSet/TestProjectionSet.java    |   3 +-
 .../impl/svremover/AbstractGenericCopierTest.java  |   3 +-
 .../resultSet/project/TestProjectedTuple.java      |   3 +-
 .../resultSet/project/TestProjectionType.java      |   3 +-
 .../common/TestNumericEquiDepthHistogram.java      |   3 +-
 .../TestHardAffinityFragmentParallelizer.java      |   3 +-
 .../drill/exec/planner/logical/DrillOptiqTest.java |   3 +-
 .../exec/planner/logical/FilterSplitTest.java      |   3 +-
 .../drill/exec/record/TestMaterializedField.java   |   3 +-
 .../record/metadata/schema/TestSchemaProvider.java |   3 +-
 .../exec/resourcemgr/TestResourcePoolTree.java     |   3 +-
 .../TestBestFitSelectionPolicy.java                |   3 +-
 .../TestDefaultSelectionPolicy.java                |   3 +-
 .../config/selectors/TestAclSelector.java          |   3 +-
 .../config/selectors/TestComplexSelectors.java     |   3 +-
 .../config/selectors/TestNotEqualSelector.java     |   3 +-
 .../selectors/TestResourcePoolSelectors.java       |   3 +-
 .../config/selectors/TestTagSelector.java          |   3 +-
 .../rpc/control/ConnectionManagerRegistryTest.java |   3 +-
 .../control/TestLocalControlConnectionManager.java |   3 +-
 .../apache/drill/exec/server/TestFailureUtils.java |   3 +-
 .../drill/exec/server/options/OptionValueTest.java |   3 +-
 .../server/options/PersistedOptionValueTest.java   |   3 +-
 .../exec/server/options/TestConfigLinkage.java     |   3 +-
 .../exec/server/rest/StatusResourcesTest.java      |   3 +-
 .../exec/server/rest/TestMainLoginPageModel.java   |   3 +-
 .../exec/server/rest/WebSessionResourcesTest.java  |   3 +-
 .../rest/spnego/TestDrillSpnegoAuthenticator.java  |   3 +-
 .../rest/spnego/TestSpnegoAuthentication.java      |   3 +-
 .../exec/server/rest/spnego/TestSpnegoConfig.java  |   3 +-
 .../drill/exec/sql/TestSqlBracketlessSyntax.java   |   3 +-
 .../drill/exec/store/StorageStrategyTest.java      |   3 +-
 .../exec/store/bson/TestBsonRecordReader.java      |   3 +-
 .../drill/exec/store/dfs/TestDrillFileSystem.java  |   3 +-
 .../store/dfs/TestFormatPluginOptionExtractor.java |   3 +-
 .../store/parquet/TestComplexColumnInSchema.java   |   3 +-
 .../store/parquet/TestParquetMetadataVersion.java  |   3 +-
 .../store/parquet/TestParquetReaderConfig.java     |   3 +-
 .../store/parquet/TestParquetReaderUtility.java    |   3 +-
 .../drill/exec/store/store/TestAssignment.java     |   3 +-
 ...Drill2130JavaExecHamcrestConfigurationTest.java |   3 +-
 .../drill/exec/util/DrillExceptionUtilTest.java    |   3 +-
 .../drill/exec/util/FileSystemUtilTestBase.java    |   3 +-
 .../exec/util/TestApproximateStringMatcher.java    |   3 +-
 .../drill/exec/util/TestArrayWrappedIntIntMap.java |   3 +-
 .../exec/util/TestValueVectorElementFormatter.java |   3 +-
 .../drill/exec/vector/TestSplitAndTransfer.java    |   3 +-
 .../exec/vector/accessor/GenericAccessorTest.java  |   3 +-
 .../exec/vector/accessor/TestTimePrintMillis.java  |   3 +-
 .../complex/writer/TestPromotableWriter.java       |   3 +-
 .../exec/vector/complex/writer/TestRepeated.java   |   3 +-
 .../drill/exec/work/filter/BloomFilterTest.java    |   3 +-
 .../work/fragment/FragmentStatusReporterTest.java  |   3 +-
 .../java/org/apache/drill/test/ExampleTest.java    |   2 +-
 .../test/rowSet/test/TestRowSetComparison.java     |   3 +-
 .../org/apache/drill/jdbc/ITTestShadedJar.java     |   3 +-
 .../jdbc/ConnectionTransactionMethodsTest.java     |   3 +-
 .../apache/drill/jdbc/DatabaseMetaDataTest.java    |   3 +-
 .../drill/jdbc/DrillColumnMetaDataListTest.java    |   3 +-
 .../jdbc/impl/TypeConvertingSqlAccessorTest.java   |   3 +-
 ...Drill2130JavaJdbcHamcrestConfigurationTest.java |   3 +-
 .../Drill2288GetColumnsMetadataWhenNoRowsTest.java |   3 +-
 .../drill/exec/memory/BoundsCheckingTest.java      |   4 +-
 .../apache/drill/exec/memory/TestAccountant.java   |   3 +-
 .../drill/exec/memory/TestBaseAllocator.java       |   3 +-
 .../apache/drill/exec/memory/TestEndianess.java    |   3 +-
 .../record/metadata/TestMetadataProperties.java    |   3 +-
 .../schema/parser/TestParserErrorHandling.java     |   3 +-
 .../metadata/schema/parser/TestSchemaParser.java   |   3 +-
 .../exec/vector/VariableLengthVectorTest.java      |   4 +-
 .../org/apache/drill/exec/vector/VectorTest.java   |   3 +-
 .../drill/common/expression/SchemaPathTest.java    |   3 +-
 .../expression/fn/JodaDateValidatorTest.java       |   3 +-
 .../drill/common/logical/data/OrderTest.java       |   3 +-
 .../drill/metastore/iceberg/IcebergBaseTest.java   |  11 +-
 .../components/tables/TestBasicTablesRequests.java |   3 +-
 .../tables/TestBasicTablesTransformer.java         |   3 +-
 .../components/tables/TestMetastoreTableInfo.java  |   3 +-
 .../tables/TestTableMetadataUnitConversion.java    |   3 +-
 .../metastore/metadata/MetadataSerDeTest.java      |   3 +-
 pom.xml                                            |  19 --
 140 files changed, 556 insertions(+), 410 deletions(-)

diff --git a/common/src/main/java/org/apache/drill/common/util/GuavaPatcher.java b/common/src/main/java/org/apache/drill/common/util/GuavaPatcher.java
index 921d56d..8eed12d 100644
--- a/common/src/main/java/org/apache/drill/common/util/GuavaPatcher.java
+++ b/common/src/main/java/org/apache/drill/common/util/GuavaPatcher.java
@@ -24,154 +24,157 @@ import java.util.List;
 import java.util.stream.Collectors;
 import java.util.stream.IntStream;
 
-import javassist.CannotCompileException;
 import javassist.ClassPool;
 import javassist.CtClass;
 import javassist.CtConstructor;
 import javassist.CtMethod;
 import javassist.CtNewMethod;
-import javassist.NotFoundException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 public class GuavaPatcher {
   private static final Logger logger = LoggerFactory.getLogger(GuavaPatcher.class);
 
-  private static boolean patched;
+  private static boolean patchingAttempted;
 
   public static synchronized void patch() {
-    if (!patched) {
-      try {
-        patchStopwatch();
-        patchCloseables();
-        patchPreconditions();
-        patched = true;
-      } catch (Throwable e) {
-        logger.warn("Unable to patch Guava classes.", e);
-      }
+    if (!patchingAttempted) {
+      patchingAttempted = true;
+      patchStopwatch();
+      patchCloseables();
+      patchPreconditions();
     }
   }
 
   /**
    * Makes Guava stopwatch look like the old version for compatibility with hbase-server (for test purposes).
    */
-  private static void patchStopwatch() throws Exception {
-
-    ClassPool cp = ClassPool.getDefault();
-    CtClass cc = cp.get("com.google.common.base.Stopwatch");
-
-    // Expose the constructor for Stopwatch for old libraries who use the pattern new Stopwatch().start().
-    for (CtConstructor c : cc.getConstructors()) {
-      if (!Modifier.isStatic(c.getModifiers())) {
-        c.setModifiers(Modifier.PUBLIC);
+  private static void patchStopwatch() {
+    try {
+      ClassPool cp = ClassPool.getDefault();
+      CtClass cc = cp.get("com.google.common.base.Stopwatch");
+
+      // Expose the constructor for Stopwatch for old libraries who use the pattern new Stopwatch().start().
+      for (CtConstructor c : cc.getConstructors()) {
+        if (!Modifier.isStatic(c.getModifiers())) {
+          c.setModifiers(Modifier.PUBLIC);
+        }
       }
-    }
 
-    // Add back the Stopwatch.elapsedMillis() method for old consumers.
-    CtMethod newMethod = CtNewMethod.make(
-        "public long elapsedMillis() { return elapsed(java.util.concurrent.TimeUnit.MILLISECONDS); }", cc);
-    cc.addMethod(newMethod);
+      // Add back the Stopwatch.elapsedMillis() method for old consumers.
+      CtMethod newMethod = CtNewMethod.make(
+          "public long elapsedMillis() { return elapsed(java.util.concurrent.TimeUnit.MILLISECONDS); }", cc);
+      cc.addMethod(newMethod);
 
-    // Load the modified class instead of the original.
-    cc.toClass();
+      // Load the modified class instead of the original.
+      cc.toClass();
 
-    logger.info("Google's Stopwatch patched for old HBase Guava version.");
+      logger.info("Google's Stopwatch patched for old HBase Guava version.");
+    } catch (Exception e) {
+      logger.warn("Unable to patch Guava classes.", e);
+    }
   }
 
-  private static void patchCloseables() throws Exception {
-
-    ClassPool cp = ClassPool.getDefault();
-    CtClass cc = cp.get("com.google.common.io.Closeables");
+  private static void patchCloseables() {
+    try {
+      ClassPool cp = ClassPool.getDefault();
+      CtClass cc = cp.get("com.google.common.io.Closeables");
 
+      // Add back the Closeables.closeQuietly() method for old consumers.
+      CtMethod newMethod = CtNewMethod.make(
+          "public static void closeQuietly(java.io.Closeable closeable) { try{closeable.close();}catch(Exception e){} }",
+          cc);
+      cc.addMethod(newMethod);
 
-    // Add back the Closeables.closeQuietly() method for old consumers.
-    CtMethod newMethod = CtNewMethod.make(
-        "public static void closeQuietly(java.io.Closeable closeable) { try{closeable.close();}catch(Exception e){} }",
-        cc);
-    cc.addMethod(newMethod);
-
-    // Load the modified class instead of the original.
-    cc.toClass();
+      // Load the modified class instead of the original.
+      cc.toClass();
 
-    logger.info("Google's Closeables patched for old HBase Guava version.");
+      logger.info("Google's Closeables patched for old HBase Guava version.");
+    } catch (Exception e) {
+      logger.warn("Unable to patch Guava classes.", e);
+    }
   }
 
   /**
    * Patches Guava Preconditions with missing methods, added for the Apache Iceberg.
    */
-  private static void patchPreconditions() throws NotFoundException, CannotCompileException {
-    ClassPool cp = ClassPool.getDefault();
-    CtClass cc = cp.get("com.google.common.base.Preconditions");
-
-    // Javassist does not support varargs, generate methods with varying number of arguments
-    int startIndex = 1;
-    int endIndex = 5;
-
-    List<String> methodsWithVarargsTemplates = Arrays.asList(
-      "public static void checkArgument(boolean expression, String errorMessageTemplate, %s) {\n"
-        + "    if (!expression) {\n"
-        + "      throw new IllegalArgumentException(format(errorMessageTemplate, new Object[] { %s }));\n"
-        + "    }\n"
-        + "  }",
-
-      "public static Object checkNotNull(Object reference, String errorMessageTemplate, %s) {\n"
-        + "    if (reference == null) {\n"
-        + "      throw new NullPointerException(format(errorMessageTemplate, new Object[] { %s }));\n"
-        + "    } else {\n"
-        + "      return reference;\n"
-        + "    }\n"
-        + "  }",
-
-      "public static void checkState(boolean expression, String errorMessageTemplate, %s) {\n"
-        + "    if (!expression) {\n"
-        + "      throw new IllegalStateException(format(errorMessageTemplate, new Object[] { %s }));\n"
-        + "    }\n"
-        + "  }"
-    );
-
-    List<String> methodsWithPrimitives = Arrays.asList(
-      "public static void checkArgument(boolean expression, String errorMessageTemplate, int arg1) {\n"
-        + "    if (!expression) {\n"
-        + "      throw new IllegalArgumentException(format(errorMessageTemplate, new Object[] { new Integer(arg1) }));\n"
-        + "    }\n"
-        + "  }",
-      "public static Object checkNotNull(Object reference, String errorMessageTemplate, int arg1) {\n"
-        + "    if (reference == null) {\n"
-        + "      throw new NullPointerException(format(errorMessageTemplate, new Object[] { new Integer(arg1) }));\n"
-        + "    } else {\n"
-        + "      return reference;\n"
-        + "    }\n"
-        + "  }"
-    );
-
-    List<String> newMethods = IntStream.rangeClosed(startIndex, endIndex)
-      .mapToObj(
-        i -> {
-          List<String> args = IntStream.rangeClosed(startIndex, i)
-            .mapToObj(j -> "arg" + j)
-            .collect(Collectors.toList());
-
-          String methodInput = args.stream()
-            .map(arg -> "Object " + arg)
-            .collect(Collectors.joining(", "));
-
-          String arrayInput = String.join(", ", args);
-
-          return methodsWithVarargsTemplates.stream()
-            .map(method -> String.format(method, methodInput, arrayInput))
-            .collect(Collectors.toList());
-        })
-      .flatMap(Collection::stream)
-      .collect(Collectors.toList());
-
-    newMethods.addAll(methodsWithPrimitives);
-
-    for (String method : newMethods) {
-      CtMethod newMethod = CtNewMethod.make(method, cc);
-      cc.addMethod(newMethod);
-    }
+  private static void patchPreconditions() {
+    try {
+      ClassPool cp = ClassPool.getDefault();
+      CtClass cc = cp.get("com.google.common.base.Preconditions");
+
+      // Javassist does not support varargs, generate methods with varying number of arguments
+      int startIndex = 1;
+      int endIndex = 5;
+
+      List<String> methodsWithVarargsTemplates = Arrays.asList(
+          "public static void checkArgument(boolean expression, String errorMessageTemplate, %s) {\n"
+              + "    if (!expression) {\n"
+              + "      throw new IllegalArgumentException(format(errorMessageTemplate, new Object[] { %s }));\n"
+              + "    }\n"
+              + "  }",
+
+          "public static Object checkNotNull(Object reference, String errorMessageTemplate, %s) {\n"
+              + "    if (reference == null) {\n"
+              + "      throw new NullPointerException(format(errorMessageTemplate, new Object[] { %s }));\n"
+              + "    } else {\n"
+              + "      return reference;\n"
+              + "    }\n"
+              + "  }",
+
+          "public static void checkState(boolean expression, String errorMessageTemplate, %s) {\n"
+              + "    if (!expression) {\n"
+              + "      throw new IllegalStateException(format(errorMessageTemplate, new Object[] { %s }));\n"
+              + "    }\n"
+              + "  }"
+      );
+
+      List<String> methodsWithPrimitives = Arrays.asList(
+          "public static void checkArgument(boolean expression, String errorMessageTemplate, int arg1) {\n"
+              + "    if (!expression) {\n"
+              + "      throw new IllegalArgumentException(format(errorMessageTemplate, new Object[] { new Integer(arg1) }));\n"
+              + "    }\n"
+              + "  }",
+          "public static Object checkNotNull(Object reference, String errorMessageTemplate, int arg1) {\n"
+              + "    if (reference == null) {\n"
+              + "      throw new NullPointerException(format(errorMessageTemplate, new Object[] { new Integer(arg1) }));\n"
+              + "    } else {\n"
+              + "      return reference;\n"
+              + "    }\n"
+              + "  }"
+      );
+
+      List<String> newMethods = IntStream.rangeClosed(startIndex, endIndex)
+          .mapToObj(
+              i -> {
+                List<String> args = IntStream.rangeClosed(startIndex, i)
+                    .mapToObj(j -> "arg" + j)
+                    .collect(Collectors.toList());
+
+                String methodInput = args.stream()
+                    .map(arg -> "Object " + arg)
+                    .collect(Collectors.joining(", "));
+
+                String arrayInput = String.join(", ", args);
+
+                return methodsWithVarargsTemplates.stream()
+                    .map(method -> String.format(method, methodInput, arrayInput))
+                    .collect(Collectors.toList());
+              })
+          .flatMap(Collection::stream)
+          .collect(Collectors.toList());
+
+      newMethods.addAll(methodsWithPrimitives);
+
+      for (String method : newMethods) {
+        CtMethod newMethod = CtNewMethod.make(method, cc);
+        cc.addMethod(newMethod);
+      }
 
-    cc.toClass();
-    logger.info("Google's Preconditions were patched to hold new methods.");
+      cc.toClass();
+      logger.info("Google's Preconditions were patched to hold new methods.");
+    } catch (Exception e) {
+      logger.warn("Unable to patch Guava classes.", e);
+    }
   }
 }
diff --git a/common/src/main/java/org/apache/drill/common/util/ProtobufPatcher.java b/common/src/main/java/org/apache/drill/common/util/ProtobufPatcher.java
index b0c11a5..3556eec 100644
--- a/common/src/main/java/org/apache/drill/common/util/ProtobufPatcher.java
+++ b/common/src/main/java/org/apache/drill/common/util/ProtobufPatcher.java
@@ -25,7 +25,6 @@ import javassist.CtMethod;
 import javassist.CtNewConstructor;
 import javassist.CtNewMethod;
 import javassist.Modifier;
-import javassist.NotFoundException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -40,14 +39,10 @@ public class ProtobufPatcher {
    */
   public static synchronized void patch() {
     if (!patchingAttempted) {
-      try {
-        patchingAttempted = true;
-        patchByteString();
-        patchGeneratedMessageLite();
-        patchGeneratedMessageLiteBuilder();
-      } catch (Exception e) {
-        logger.warn("Unable to patch Protobuf.", e);
-      }
+      patchingAttempted = true;
+      patchByteString();
+      patchGeneratedMessageLite();
+      patchGeneratedMessageLiteBuilder();
     }
   }
 
@@ -56,73 +51,75 @@ public class ProtobufPatcher {
    * that were made final in version 3.6+ of protobuf.
    * This method removes the final modifiers. It also creates and loads classes
    * that were made private nested in protobuf 3.6+ to be accessible by the old fully qualified name.
-   *
-   * @throws NotFoundException if unable to find a method or class to patch.
-   * @throws CannotCompileException if unable to compile the patched class.
    */
-  private static void patchByteString() throws NotFoundException, CannotCompileException {
-    ClassPool classPool = ClassPool.getDefault();
-    CtClass byteString = classPool.get("com.google.protobuf.ByteString");
-    removeFinal(byteString.getDeclaredMethod("toString"));
-    removeFinal(byteString.getDeclaredMethod("hashCode"));
-    removeFinal(byteString.getDeclaredMethod("iterator"));
-
-    // Need to inherit from these classes to make them accessible by the old path.
-    CtClass googleLiteralByteString = classPool.get("com.google.protobuf.ByteString$LiteralByteString");
-    removePrivate(googleLiteralByteString);
-    CtClass googleBoundedByteString = classPool.get("com.google.protobuf.ByteString$BoundedByteString");
-    removePrivate(googleBoundedByteString);
-    removeFinal(googleBoundedByteString);
-    for (CtMethod ctMethod : googleLiteralByteString.getDeclaredMethods()) {
-      removeFinal(ctMethod);
+  private static void patchByteString() {
+    try {
+      ClassPool classPool = ClassPool.getDefault();
+      CtClass byteString = classPool.get("com.google.protobuf.ByteString");
+      removeFinal(byteString.getDeclaredMethod("toString"));
+      removeFinal(byteString.getDeclaredMethod("hashCode"));
+      removeFinal(byteString.getDeclaredMethod("iterator"));
+
+      // Need to inherit from these classes to make them accessible by the old path.
+      CtClass googleLiteralByteString = classPool.get("com.google.protobuf.ByteString$LiteralByteString");
+      removePrivate(googleLiteralByteString);
+      CtClass googleBoundedByteString = classPool.get("com.google.protobuf.ByteString$BoundedByteString");
+      removePrivate(googleBoundedByteString);
+      removeFinal(googleBoundedByteString);
+      for (CtMethod ctMethod : googleLiteralByteString.getDeclaredMethods()) {
+        removeFinal(ctMethod);
+      }
+      byteString.toClass();
+      googleLiteralByteString.toClass();
+      googleBoundedByteString.toClass();
+
+      // Adding the classes back to the old path.
+      CtClass literalByteString = classPool.makeClass("com.google.protobuf.LiteralByteString");
+      literalByteString.setSuperclass(googleLiteralByteString);
+      literalByteString.toClass();
+      CtClass boundedByteString = classPool.makeClass("com.google.protobuf.BoundedByteString");
+      boundedByteString.setSuperclass(googleBoundedByteString);
+      boundedByteString.toClass();
+    } catch (Exception e) {
+      logger.warn("Unable to patch Protobuf.", e);
     }
-    byteString.toClass();
-    googleLiteralByteString.toClass();
-    googleBoundedByteString.toClass();
-
-    // Adding the classes back to the old path.
-    CtClass literalByteString = classPool.makeClass("com.google.protobuf.LiteralByteString");
-    literalByteString.setSuperclass(googleLiteralByteString);
-    literalByteString.toClass();
-    CtClass boundedByteString = classPool.makeClass("com.google.protobuf.BoundedByteString");
-    boundedByteString.setSuperclass(googleBoundedByteString);
-    boundedByteString.toClass();
   }
 
   /**
    * MapR-DB client extends {@link com.google.protobuf.GeneratedMessageLite} and overrides some methods,
    * that were made final in version 3.6+ of protobuf.
    * This method removes the final modifiers.
-   *
-   * @throws NotFoundException if unable to find a method or class to patch.
-   * @throws CannotCompileException if unable to compile the patched method body.
    */
-  private static void patchGeneratedMessageLite() throws NotFoundException, CannotCompileException {
-    ClassPool classPool = ClassPool.getDefault();
-    CtClass generatedMessageLite = classPool.get("com.google.protobuf.GeneratedMessageLite");
-    removeFinal(generatedMessageLite.getDeclaredMethod("getParserForType"));
-    removeFinal(generatedMessageLite.getDeclaredMethod("isInitialized"));
-
-    // The method was removed, but it is used in com.mapr.fs.proto.Dbserver.
-    // Adding it back.
-    generatedMessageLite.addMethod(CtNewMethod.make("protected void makeExtensionsImmutable() { }", generatedMessageLite));
-
-    // A constructor with this signature was removed. Adding it back.
-    generatedMessageLite.addConstructor(CtNewConstructor.make("protected GeneratedMessageLite(com.google.protobuf.GeneratedMessageLite.Builder builder) { }", generatedMessageLite));
-
-    // This single method was added instead of several abstract methods.
-    // MapR-DB client doesn't use it, but it was added in overridden equals() method.
-    // Adding default implementation.
-    CtMethod dynamicMethod = generatedMessageLite.getDeclaredMethod("dynamicMethod", new CtClass[] {
-        classPool.get("com.google.protobuf.GeneratedMessageLite$MethodToInvoke"),
-        classPool.get("java.lang.Object"),
-        classPool.get("java.lang.Object")});
-    addImplementation(dynamicMethod, "if ($1.equals(com.google.protobuf.GeneratedMessageLite.MethodToInvoke.GET_DEFAULT_INSTANCE)) {" +
-                                  "  return this;" +
-                                  "} else {" +
-                                  "  return null;" +
-                                  "}");
-    generatedMessageLite.toClass();
+  private static void patchGeneratedMessageLite() {
+    try {
+      ClassPool classPool = ClassPool.getDefault();
+      CtClass generatedMessageLite = classPool.get("com.google.protobuf.GeneratedMessageLite");
+      removeFinal(generatedMessageLite.getDeclaredMethod("getParserForType"));
+      removeFinal(generatedMessageLite.getDeclaredMethod("isInitialized"));
+
+      // The method was removed, but it is used in com.mapr.fs.proto.Dbserver.
+      // Adding it back.
+      generatedMessageLite.addMethod(CtNewMethod.make("protected void makeExtensionsImmutable() { }", generatedMessageLite));
+
+      // A constructor with this signature was removed. Adding it back.
+      generatedMessageLite.addConstructor(CtNewConstructor.make("protected GeneratedMessageLite(com.google.protobuf.GeneratedMessageLite.Builder builder) { }", generatedMessageLite));
+
+      // This single method was added instead of several abstract methods.
+      // MapR-DB client doesn't use it, but it is called in the overridden equals() method.
+      // Adding a default implementation.
+      CtMethod dynamicMethod = generatedMessageLite.getDeclaredMethod("dynamicMethod", new CtClass[]{
+          classPool.get("com.google.protobuf.GeneratedMessageLite$MethodToInvoke"),
+          classPool.get("java.lang.Object"),
+          classPool.get("java.lang.Object")});
+      addImplementation(dynamicMethod, "if ($1.equals(com.google.protobuf.GeneratedMessageLite.MethodToInvoke.GET_DEFAULT_INSTANCE)) {" +
+          "  return this;" +
+          "} else {" +
+          "  return null;" +
+          "}");
+      generatedMessageLite.toClass();
+    } catch (Exception e) {
+      logger.warn("Unable to patch Protobuf.", e);
+    }
   }
 
   /**
@@ -130,17 +127,18 @@ public class ProtobufPatcher {
    * that were made final in version 3.6+ of protobuf.
    * This method removes the final modifiers.
    * Also, adding back a default constructor that was removed.
-   *
-   * @throws NotFoundException if unable to find a method or class to patch.
-   * @throws CannotCompileException if unable to add a default constructor.
    */
-  private static void patchGeneratedMessageLiteBuilder() throws NotFoundException, CannotCompileException {
-    ClassPool classPool = ClassPool.getDefault();
-    CtClass builder = classPool.get("com.google.protobuf.GeneratedMessageLite$Builder");
-    removeFinal(builder.getDeclaredMethod("isInitialized"));
-    removeFinal(builder.getDeclaredMethod("clear"));
-    builder.addConstructor(CtNewConstructor.defaultConstructor(builder));
-    builder.toClass();
+  private static void patchGeneratedMessageLiteBuilder() {
+    try {
+      ClassPool classPool = ClassPool.getDefault();
+      CtClass builder = classPool.get("com.google.protobuf.GeneratedMessageLite$Builder");
+      removeFinal(builder.getDeclaredMethod("isInitialized"));
+      removeFinal(builder.getDeclaredMethod("clear"));
+      builder.addConstructor(CtNewConstructor.defaultConstructor(builder));
+      builder.toClass();
+    } catch (Exception e) {
+      logger.warn("Unable to patch Protobuf.", e);
+    }
   }
 
   /**
@@ -176,7 +174,7 @@ public class ProtobufPatcher {
   /**
    * Removes abstract modifier and adds implementation to a given method.
    *
-   * @param ctMethod method to process.
+   * @param ctMethod   method to process.
    * @param methodBody method implementation.
    * @throws CannotCompileException if unable to compile given method body.
    */
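
For readers who have not worked with Javassist, every patcher method above follows the same shape: look the target class up in the default ClassPool before the JVM loads it, rewrite its bytecode (drop a final modifier, add a method or constructor compiled from a plain source-text snippet in which $1, $2, ... stand for the parameters), then call toClass() to register the patched definition. The following is a minimal sketch of that pattern only; the class and method names are hypothetical placeholders, not Drill, protobuf, or MapR classes.

    import javassist.ClassPool;
    import javassist.CtClass;
    import javassist.CtMethod;
    import javassist.CtNewMethod;
    import javassist.Modifier;

    public class PatchSketch {
      public static void patch() {
        try {
          ClassPool pool = ClassPool.getDefault();
          // Hypothetical target; it must not have been loaded by the JVM yet.
          CtClass target = pool.get("com.example.SomeClass");

          // Strip the final modifier from a hypothetical method.
          CtMethod method = target.getDeclaredMethod("someMethod");
          method.setModifiers(method.getModifiers() & ~Modifier.FINAL);

          // Add a new method compiled from a source-text snippet ($1 is the first parameter).
          target.addMethod(CtNewMethod.make(
              "public int answer(int x) { return $1 + 1; }", target));

          // Register the patched definition with the current class loader.
          target.toClass();
        } catch (Exception e) {
          // Mirror the patchers above: log and continue instead of failing the run.
          System.err.println("Unable to patch com.example.SomeClass: " + e);
        }
      }
    }
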
diff --git a/common/src/test/java/org/apache/drill/common/TestVersion.java b/common/src/test/java/org/apache/drill/common/TestVersion.java
index cabacb3..c11dbaa 100644
--- a/common/src/test/java/org/apache/drill/common/TestVersion.java
+++ b/common/src/test/java/org/apache/drill/common/TestVersion.java
@@ -21,13 +21,14 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 /**
  * Test class for {@code Version}
  *
  */
-public class TestVersion {
+public class TestVersion extends BaseTest {
 
   @Test
   public void testSnapshotVersion() {
diff --git a/common/src/test/java/org/apache/drill/common/exceptions/TestUserException.java b/common/src/test/java/org/apache/drill/common/exceptions/TestUserException.java
index 42af634..5e61bbc 100644
--- a/common/src/test/java/org/apache/drill/common/exceptions/TestUserException.java
+++ b/common/src/test/java/org/apache/drill/common/exceptions/TestUserException.java
@@ -19,13 +19,14 @@ package org.apache.drill.common.exceptions;
 
 import org.apache.drill.exec.proto.UserBitShared.DrillPBError;
 import org.apache.drill.exec.proto.UserBitShared.DrillPBError.ErrorType;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
 /**
  * Test various use cases when creating user exceptions
  */
-public class TestUserException {
+public class TestUserException extends BaseTest {
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory
       .getLogger("--ignore.as.this.is.for.testing.exceptions--");
 
diff --git a/common/src/test/java/org/apache/drill/common/map/TestCaseInsensitiveMap.java b/common/src/test/java/org/apache/drill/common/map/TestCaseInsensitiveMap.java
index 19b2b93..2d9e04b 100644
--- a/common/src/test/java/org/apache/drill/common/map/TestCaseInsensitiveMap.java
+++ b/common/src/test/java/org/apache/drill/common/map/TestCaseInsensitiveMap.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.common.map;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import java.util.HashMap;
@@ -28,7 +29,7 @@ import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
-public class TestCaseInsensitiveMap {
+public class TestCaseInsensitiveMap extends BaseTest {
 
   @Test
   public void putAndGet() {
diff --git a/common/src/test/java/org/apache/drill/common/util/function/TestCheckedFunction.java b/common/src/test/java/org/apache/drill/common/util/function/TestCheckedFunction.java
index a1ab389..0ded4ce 100644
--- a/common/src/test/java/org/apache/drill/common/util/function/TestCheckedFunction.java
+++ b/common/src/test/java/org/apache/drill/common/util/function/TestCheckedFunction.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.common.util.function;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.ExpectedException;
@@ -24,7 +25,7 @@ import org.junit.rules.ExpectedException;
 import java.util.HashMap;
 import java.util.Map;
 
-public class TestCheckedFunction {
+public class TestCheckedFunction extends BaseTest {
 
   @Rule
   public ExpectedException thrown = ExpectedException.none();
diff --git a/common/src/test/java/org/apache/drill/test/BaseTest.java b/common/src/test/java/org/apache/drill/test/BaseTest.java
new file mode 100644
index 0000000..adf0d5e
--- /dev/null
+++ b/common/src/test/java/org/apache/drill/test/BaseTest.java
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.test;
+
+import org.apache.drill.common.util.GuavaPatcher;
+import org.apache.drill.common.util.ProtobufPatcher;
+
+/**
+ * Contains patchers that must be executed at the very beginning of test runs.
+ * All Drill test classes should inherit from it to avoid exceptions (e.g. NoSuchMethodError).
+ */
+public class BaseTest {
+
+  static {
+    /*
+     * HBase and MapR-DB clients use an older version of protobuf,
+     * and override some methods that became final in recent versions.
+     * This code removes these final modifiers.
+     */
+    ProtobufPatcher.patch();
+    /*
+     * Some libraries, such as Hadoop or HBase, depend on incompatible versions of Guava.
+     * This code adds back some methods so that the libraries can work with a single Guava version.
+     */
+    GuavaPatcher.patch();
+  }
+}
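
BaseTest relies only on Java's class-initialization order: a superclass's static initializer runs before any static initializer, @BeforeClass method, or test method of a subclass, so the two patchers are guaranteed to execute before any test code touches protobuf or Guava. As a minimal, hypothetical illustration (not part of this change), a new test only needs to extend BaseTest:

    package org.apache.drill.example;   // hypothetical package

    import org.apache.drill.test.BaseTest;
    import org.junit.Test;

    import static org.junit.Assert.assertTrue;

    public class ExampleDrillTest extends BaseTest {

      @Test
      public void runsAfterPatching() {
        // By the time this runs, BaseTest's static block has already executed
        // ProtobufPatcher.patch() and GuavaPatcher.patch().
        assertTrue(true);
      }
    }
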
diff --git a/common/src/test/java/org/apache/drill/test/Drill2130CommonHamcrestConfigurationTest.java b/common/src/test/java/org/apache/drill/test/Drill2130CommonHamcrestConfigurationTest.java
index a747966..ba5f567 100644
--- a/common/src/test/java/org/apache/drill/test/Drill2130CommonHamcrestConfigurationTest.java
+++ b/common/src/test/java/org/apache/drill/test/Drill2130CommonHamcrestConfigurationTest.java
@@ -23,7 +23,7 @@ import static org.junit.Assert.assertThat;
 import static org.junit.Assert.fail;
 import static org.hamcrest.CoreMatchers.equalTo;
 
-public class Drill2130CommonHamcrestConfigurationTest {
+public class Drill2130CommonHamcrestConfigurationTest extends BaseTest {
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(Drill2130CommonHamcrestConfigurationTest.class);
 
   @SuppressWarnings("unused")
diff --git a/common/src/test/java/org/apache/drill/test/DrillTest.java b/common/src/test/java/org/apache/drill/test/DrillTest.java
index 54506e3..43015a2 100644
--- a/common/src/test/java/org/apache/drill/test/DrillTest.java
+++ b/common/src/test/java/org/apache/drill/test/DrillTest.java
@@ -23,7 +23,6 @@ import java.lang.management.MemoryMXBean;
 import java.util.List;
 
 import org.apache.drill.common.util.DrillStringUtils;
-import org.apache.drill.common.util.GuavaPatcher;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Rule;
@@ -37,12 +36,11 @@ import org.slf4j.Logger;
 import com.fasterxml.jackson.core.JsonProcessingException;
 import com.fasterxml.jackson.databind.ObjectMapper;
 
-public class DrillTest {
+public class DrillTest extends BaseTest {
 
   protected static final ObjectMapper objectMapper;
 
   static {
-    GuavaPatcher.patch();
     System.setProperty("line.separator", "\n");
     objectMapper = new ObjectMapper();
   }
diff --git a/contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/MaprDBTestsSuite.java b/contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/MaprDBTestsSuite.java
index edd5ab4..642a989 100644
--- a/contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/MaprDBTestsSuite.java
+++ b/contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/MaprDBTestsSuite.java
@@ -25,6 +25,7 @@ import org.apache.drill.exec.server.DrillbitContext;
 import org.apache.drill.exec.store.StoragePluginRegistry;
 import org.apache.drill.exec.store.dfs.FileSystemConfig;
 import org.apache.drill.hbase.HBaseTestsSuite;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
@@ -46,7 +47,7 @@ import com.mapr.drill.maprdb.tests.json.TestSimpleJson;
   TestSimpleJson.class,
   TestScanRanges.class
 })
-public class MaprDBTestsSuite {
+public class MaprDBTestsSuite extends BaseTest {
   public static final int INDEX_FLUSH_TIMEOUT = 60000;
 
   private static final boolean IS_DEBUG = ManagementFactory.getRuntimeMXBean().getInputArguments().toString().indexOf("-agentlib:jdwp") > 0;
diff --git a/contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/json/TestFieldPathHelper.java b/contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/json/TestFieldPathHelper.java
index 914ad96..1287297 100644
--- a/contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/json/TestFieldPathHelper.java
+++ b/contrib/format-maprdb/src/test/java/com/mapr/drill/maprdb/tests/json/TestFieldPathHelper.java
@@ -21,10 +21,11 @@ import static org.junit.Assert.assertEquals;
 
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.exec.store.mapr.db.json.FieldPathHelper;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.ojai.FieldPath;
 
-public class TestFieldPathHelper {
+public class TestFieldPathHelper extends BaseTest {
 
   @Test
   public void simeTests() {
diff --git a/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/HBaseTestsSuite.java b/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/HBaseTestsSuite.java
index 8460647..39c1e3b 100644
--- a/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/HBaseTestsSuite.java
+++ b/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/HBaseTestsSuite.java
@@ -22,9 +22,8 @@ import java.lang.management.ManagementFactory;
 import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.drill.exec.ZookeeperTestUtil;
-import org.apache.drill.common.util.GuavaPatcher;
-import org.apache.drill.common.util.ProtobufPatcher;
 import org.apache.drill.hbase.test.Drill2130StorageHBaseHamcrestConfigurationTest;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
@@ -53,14 +52,9 @@ import org.junit.runners.Suite.SuiteClasses;
   TestHBaseTableProvider.class,
   TestOrderedBytesConvertFunctions.class
 })
-public class HBaseTestsSuite {
+public class HBaseTestsSuite extends BaseTest {
   static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(HBaseTestsSuite.class);
 
-  static {
-    ProtobufPatcher.patch();
-    GuavaPatcher.patch();
-  }
-
   private static final boolean IS_DEBUG = ManagementFactory.getRuntimeMXBean().getInputArguments().toString().indexOf("-agentlib:jdwp") > 0;
 
   protected static final TableName TEST_TABLE_1 = TableName.valueOf("TestTable1");
diff --git a/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/test/Drill2130StorageHBaseHamcrestConfigurationTest.java b/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/test/Drill2130StorageHBaseHamcrestConfigurationTest.java
index 1936285..f74f399 100644
--- a/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/test/Drill2130StorageHBaseHamcrestConfigurationTest.java
+++ b/contrib/storage-hbase/src/test/java/org/apache/drill/hbase/test/Drill2130StorageHBaseHamcrestConfigurationTest.java
@@ -21,9 +21,10 @@ import static org.hamcrest.CoreMatchers.equalTo;
 import static org.junit.Assert.assertThat;
 import static org.junit.Assert.fail;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
-public class Drill2130StorageHBaseHamcrestConfigurationTest {
+public class Drill2130StorageHBaseHamcrestConfigurationTest extends BaseTest {
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(Drill2130StorageHBaseHamcrestConfigurationTest.class);
 
   @SuppressWarnings("unused")
diff --git a/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/inspectors/SkipFooterRecordsInspectorTest.java b/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/inspectors/SkipFooterRecordsInspectorTest.java
index 68a3878..31518cf 100644
--- a/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/inspectors/SkipFooterRecordsInspectorTest.java
+++ b/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/inspectors/SkipFooterRecordsInspectorTest.java
@@ -18,6 +18,7 @@
 package org.apache.drill.exec.store.hive.inspectors;
 
 import org.apache.drill.exec.store.hive.readers.inspectors.SkipFooterRecordsInspector;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.mapred.RecordReader;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -27,7 +28,7 @@ import static org.junit.Assert.assertNull;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
-public class SkipFooterRecordsInspectorTest {
+public class SkipFooterRecordsInspectorTest extends BaseTest {
 
   private static RecordReader<Object, Object> recordReader;
 
diff --git a/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestColumnListCache.java b/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestColumnListCache.java
index d0bfb33..bdec132 100644
--- a/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestColumnListCache.java
+++ b/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestColumnListCache.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.store.hive.schema;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.apache.drill.exec.store.hive.ColumnListsCache;
 import org.apache.drill.categories.SlowTest;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -30,7 +31,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
 
 @Category({SlowTest.class})
-public class TestColumnListCache {
+public class TestColumnListCache extends BaseTest {
 
   @Test
   public void testTableColumnsIndex() {
diff --git a/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestSchemaConversion.java b/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestSchemaConversion.java
index 7154c1b..62f3451 100644
--- a/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestSchemaConversion.java
+++ b/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/store/hive/schema/TestSchemaConversion.java
@@ -27,6 +27,7 @@ import org.apache.drill.exec.record.metadata.ColumnMetadata;
 import org.apache.drill.exec.record.metadata.PrimitiveColumnMetadata;
 import org.apache.drill.exec.record.metadata.SchemaBuilder;
 import org.apache.drill.exec.store.hive.HiveUtilities;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.junit.Rule;
 import org.junit.Test;
@@ -36,7 +37,7 @@ import org.junit.rules.ExpectedException;
 import static org.junit.Assert.assertEquals;
 
 @Category({SlowTest.class})
-public class TestSchemaConversion {
+public class TestSchemaConversion extends BaseTest {
   private static final HiveToRelDataTypeConverter dataTypeConverter
       = new HiveToRelDataTypeConverter(new SqlTypeFactoryImpl(new DrillRelDataTypeSystem()));
 
diff --git a/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/test/Drill2130StorageHiveCoreHamcrestConfigurationTest.java b/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/test/Drill2130StorageHiveCoreHamcrestConfigurationTest.java
index 12adf92..66037e0 100644
--- a/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/test/Drill2130StorageHiveCoreHamcrestConfigurationTest.java
+++ b/contrib/storage-hive/core/src/test/java/org/apache/drill/exec/test/Drill2130StorageHiveCoreHamcrestConfigurationTest.java
@@ -17,13 +17,14 @@
  */
 package org.apache.drill.exec.test;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import static org.junit.Assert.assertThat;
 import static org.junit.Assert.fail;
 import static org.hamcrest.CoreMatchers.equalTo;
 
-public class Drill2130StorageHiveCoreHamcrestConfigurationTest {
+public class Drill2130StorageHiveCoreHamcrestConfigurationTest extends BaseTest {
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(Drill2130StorageHiveCoreHamcrestConfigurationTest.class);
 
   @SuppressWarnings("unused")
diff --git a/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/TestKafkaSuit.java b/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/TestKafkaSuit.java
index b586d7d..c996c38 100644
--- a/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/TestKafkaSuit.java
+++ b/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/TestKafkaSuit.java
@@ -24,6 +24,7 @@ import org.apache.drill.categories.SlowTest;
 import org.apache.drill.exec.ZookeeperTestUtil;
 import org.apache.drill.exec.store.kafka.cluster.EmbeddedKafkaCluster;
 import org.apache.drill.exec.store.kafka.decoders.MessageReaderFactoryTest;
+import org.apache.drill.test.BaseTest;
 import org.apache.kafka.clients.admin.AdminClient;
 import org.apache.kafka.clients.admin.CreateTopicsResult;
 import org.apache.kafka.clients.admin.NewTopic;
@@ -50,7 +51,7 @@ import java.util.concurrent.atomic.AtomicInteger;
 @Category({KafkaStorageTest.class, SlowTest.class})
 @RunWith(Suite.class)
 @SuiteClasses({KafkaQueriesTest.class, MessageIteratorTest.class, MessageReaderFactoryTest.class, KafkaFilterPushdownTest.class})
-public class TestKafkaSuit {
+public class TestKafkaSuit extends BaseTest {
   private static final Logger logger = LoggerFactory.getLogger(LoggerFactory.class);
 
   private static final String LOGIN_CONF_RESOURCE_PATHNAME = "login.conf";
diff --git a/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/decoders/MessageReaderFactoryTest.java b/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/decoders/MessageReaderFactoryTest.java
index 1b8aa11..3c4779d 100644
--- a/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/decoders/MessageReaderFactoryTest.java
+++ b/contrib/storage-kafka/src/test/java/org/apache/drill/exec/store/kafka/decoders/MessageReaderFactoryTest.java
@@ -20,12 +20,13 @@ package org.apache.drill.exec.store.kafka.decoders;
 import org.apache.drill.categories.KafkaStorageTest;
 import org.apache.drill.common.exceptions.UserException;
 import org.apache.drill.exec.proto.UserBitShared.DrillPBError.ErrorType;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category({KafkaStorageTest.class})
-public class MessageReaderFactoryTest {
+public class MessageReaderFactoryTest extends BaseTest {
 
   @Test
   public void testShouldThrowExceptionAsMessageReaderIsNull() {
diff --git a/contrib/storage-kudu/src/test/java/org/apache/drill/store/kudu/TestKuduConnect.java b/contrib/storage-kudu/src/test/java/org/apache/drill/store/kudu/TestKuduConnect.java
index ad1e795..f75e983 100644
--- a/contrib/storage-kudu/src/test/java/org/apache/drill/store/kudu/TestKuduConnect.java
+++ b/contrib/storage-kudu/src/test/java/org/apache/drill/store/kudu/TestKuduConnect.java
@@ -22,6 +22,7 @@ import java.util.Arrays;
 import java.util.List;
 
 import org.apache.drill.categories.KuduStorageTest;
+import org.apache.drill.test.BaseTest;
 import org.junit.Ignore;
 import org.junit.Test;
 import org.apache.kudu.ColumnSchema;
@@ -41,7 +42,7 @@ import org.junit.experimental.categories.Category;
 
 @Ignore("requires remote kudu server")
 @Category(KuduStorageTest.class)
-public class TestKuduConnect {
+public class TestKuduConnect extends BaseTest {
   static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(TestKuduConnect.class);
 
   public static final String KUDU_MASTER = "172.31.1.99";
diff --git a/contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/MongoTestSuit.java b/contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/MongoTestSuit.java
index 7910a57..9dce09d 100644
--- a/contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/MongoTestSuit.java
+++ b/contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/MongoTestSuit.java
@@ -27,6 +27,7 @@ import java.util.concurrent.atomic.AtomicInteger;
 import org.apache.drill.categories.MongoStorageTest;
 import org.apache.drill.categories.SlowTest;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
+import org.apache.drill.test.BaseTest;
 import org.bson.Document;
 import org.bson.conversions.Bson;
 import org.junit.AfterClass;
@@ -67,7 +68,7 @@ import de.flapdoodle.embed.process.runtime.Network;
 @SuiteClasses({ TestMongoFilterPushDown.class, TestMongoProjectPushDown.class,
     TestMongoQueries.class, TestMongoChunkAssignment.class })
 @Category({SlowTest.class, MongoStorageTest.class})
-public class MongoTestSuit implements MongoTestConstants {
+public class MongoTestSuit extends BaseTest implements MongoTestConstants {
 
   private static final Logger logger = LoggerFactory.getLogger(MongoTestSuit.class);
   protected static MongoClient mongoClient;
diff --git a/contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/TestMongoChunkAssignment.java b/contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/TestMongoChunkAssignment.java
index 0f0b563..e4c71b1 100644
--- a/contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/TestMongoChunkAssignment.java
+++ b/contrib/storage-mongo/src/test/java/org/apache/drill/exec/store/mongo/TestMongoChunkAssignment.java
@@ -34,6 +34,7 @@ import org.apache.drill.exec.store.mongo.common.ChunkInfo;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.apache.drill.shaded.guava.com.google.common.collect.Maps;
 import org.apache.drill.shaded.guava.com.google.common.collect.Sets;
+import org.apache.drill.test.BaseTest;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -41,7 +42,7 @@ import org.junit.experimental.categories.Category;
 import com.mongodb.ServerAddress;
 
 @Category({SlowTest.class, MongoStorageTest.class})
-public class TestMongoChunkAssignment {
+public class TestMongoChunkAssignment extends BaseTest {
   static final String HOST_A = "A";
   static final String HOST_B = "B";
   static final String HOST_C = "C";
diff --git a/drill-yarn/src/main/java/org/apache/drill/yarn/appMaster/DrillApplicationMaster.java b/drill-yarn/src/main/java/org/apache/drill/yarn/appMaster/DrillApplicationMaster.java
index e32a531..e144bff 100644
--- a/drill-yarn/src/main/java/org/apache/drill/yarn/appMaster/DrillApplicationMaster.java
+++ b/drill-yarn/src/main/java/org/apache/drill/yarn/appMaster/DrillApplicationMaster.java
@@ -55,9 +55,8 @@ public class DrillApplicationMaster {
      */
     ProtobufPatcher.patch();
     /*
-     * HBase client uses older version of Guava's Stopwatch API,
-     * while Drill ships with 18.x which has changes the scope of
-     * these API to 'package', this code make them accessible.
+     * Some libraries, such as Hadoop or HBase, depend on incompatible versions of Guava.
+     * This code adds back some methods so that the libraries can work with a single Guava version.
      */
     GuavaPatcher.patch();
   }
diff --git a/drill-yarn/src/main/java/org/apache/drill/yarn/client/DrillOnYarn.java b/drill-yarn/src/main/java/org/apache/drill/yarn/client/DrillOnYarn.java
index 54900d3..5f79f5d 100644
--- a/drill-yarn/src/main/java/org/apache/drill/yarn/client/DrillOnYarn.java
+++ b/drill-yarn/src/main/java/org/apache/drill/yarn/client/DrillOnYarn.java
@@ -84,9 +84,8 @@ public class DrillOnYarn {
      */
     ProtobufPatcher.patch();
     /*
-     * HBase client uses older version of Guava's Stopwatch API,
-     * while Drill ships with 18.x which has changes the scope of
-     * these API to 'package', this code make them accessible.
+     * Some libraries, such as Hadoop or HBase, depend on incompatible versions of Guava.
+     * This code adds back some methods so that the libraries can work with a single Guava version.
      */
     GuavaPatcher.patch();
   }
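
The updated comment in the two static blocks above describes the Guava side of the story. As a rough sketch of the idea only (not Drill's actual GuavaPatcher, which may patch different classes and members), Javassist can re-introduce a method that newer Guava releases removed, delegating to the API that replaced it, so that older dependents still link:

    import javassist.ClassPool;
    import javassist.CtClass;
    import javassist.CtNewMethod;

    public class GuavaPatchSketch {
      public static void patch() {
        try {
          ClassPool pool = ClassPool.getDefault();
          CtClass stopwatch = pool.get("com.google.common.base.Stopwatch");

          // Assumed example: add back a removed convenience method by
          // delegating to the elapsed(TimeUnit) API that replaced it.
          stopwatch.addMethod(CtNewMethod.make(
              "public long elapsedMillis() {"
              + " return elapsed(java.util.concurrent.TimeUnit.MILLISECONDS); }",
              stopwatch));

          stopwatch.toClass();
        } catch (Exception e) {
          System.err.println("Unable to patch Guava: " + e);
        }
      }
    }
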
diff --git a/drill-yarn/src/test/java/org/apache/drill/yarn/client/TestClient.java b/drill-yarn/src/test/java/org/apache/drill/yarn/client/TestClient.java
index 8be3148..fb8727e 100644
--- a/drill-yarn/src/test/java/org/apache/drill/yarn/client/TestClient.java
+++ b/drill-yarn/src/test/java/org/apache/drill/yarn/client/TestClient.java
@@ -24,9 +24,10 @@ import java.io.ByteArrayOutputStream;
 import java.io.PrintStream;
 import java.io.UnsupportedEncodingException;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
-public class TestClient {
+public class TestClient extends BaseTest {
 
   /**
    * Unchecked exception to allow capturing "exit" events without actually
diff --git a/drill-yarn/src/test/java/org/apache/drill/yarn/client/TestCommandLineOptions.java b/drill-yarn/src/test/java/org/apache/drill/yarn/client/TestCommandLineOptions.java
index e6c220b..7b1bbe5 100644
--- a/drill-yarn/src/test/java/org/apache/drill/yarn/client/TestCommandLineOptions.java
+++ b/drill-yarn/src/test/java/org/apache/drill/yarn/client/TestCommandLineOptions.java
@@ -17,12 +17,13 @@
  */
 package org.apache.drill.yarn.client;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
 
-public class TestCommandLineOptions {
+public class TestCommandLineOptions extends BaseTest {
   @Test
   public void testOptions() {
     CommandLineOptions opts = new CommandLineOptions();
diff --git a/drill-yarn/src/test/java/org/apache/drill/yarn/core/TestConfig.java b/drill-yarn/src/test/java/org/apache/drill/yarn/core/TestConfig.java
index b2ec786..1174dc3 100644
--- a/drill-yarn/src/test/java/org/apache/drill/yarn/core/TestConfig.java
+++ b/drill-yarn/src/test/java/org/apache/drill/yarn/core/TestConfig.java
@@ -33,11 +33,12 @@ import java.util.HashMap;
 import java.util.Map;
 
 import org.apache.commons.io.FileUtils;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import com.typesafe.config.Config;
 
-public class TestConfig {
+public class TestConfig extends BaseTest {
 
   /**
    * Mock config that lets us tinker with loading and environment access for
diff --git a/drill-yarn/src/test/java/org/apache/drill/yarn/scripts/TestScripts.java b/drill-yarn/src/test/java/org/apache/drill/yarn/scripts/TestScripts.java
index 846024e..8422c62 100644
--- a/drill-yarn/src/test/java/org/apache/drill/yarn/scripts/TestScripts.java
+++ b/drill-yarn/src/test/java/org/apache/drill/yarn/scripts/TestScripts.java
@@ -30,6 +30,7 @@ import java.nio.file.Files;
 import java.util.HashMap;
 import java.util.Map;
 
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.yarn.scripts.ScriptUtils.DrillbitRun;
 import org.apache.drill.yarn.scripts.ScriptUtils.RunResult;
 import org.apache.drill.yarn.scripts.ScriptUtils.ScriptRunner;
@@ -50,7 +51,7 @@ import org.junit.Test;
 
 // Turned off by default: works only in a developer setup
 @Ignore
-public class TestScripts {
+public class TestScripts extends BaseTest {
   static ScriptUtils context;
 
   @BeforeClass
diff --git a/drill-yarn/src/test/java/org/apache/drill/yarn/zk/TestAmRegistration.java b/drill-yarn/src/test/java/org/apache/drill/yarn/zk/TestAmRegistration.java
index c8d940e..d0de57d 100644
--- a/drill-yarn/src/test/java/org/apache/drill/yarn/zk/TestAmRegistration.java
+++ b/drill-yarn/src/test/java/org/apache/drill/yarn/zk/TestAmRegistration.java
@@ -21,10 +21,11 @@ import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 import org.apache.curator.test.TestingServer;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.yarn.appMaster.AMRegistrar.AMRegistrationException;
 import org.junit.Test;
 
-public class TestAmRegistration {
+public class TestAmRegistration extends BaseTest {
   private static final String TEST_CLUSTER_ID = "drillbits";
   private static final String TEST_ZK_ROOT = "drill";
   private static final String TEST_AM_HOST = "localhost";
diff --git a/drill-yarn/src/test/java/org/apache/drill/yarn/zk/TestZkRegistry.java b/drill-yarn/src/test/java/org/apache/drill/yarn/zk/TestZkRegistry.java
index 4263b01..654ffad 100644
--- a/drill-yarn/src/test/java/org/apache/drill/yarn/zk/TestZkRegistry.java
+++ b/drill-yarn/src/test/java/org/apache/drill/yarn/zk/TestZkRegistry.java
@@ -34,6 +34,7 @@ import org.apache.curator.x.discovery.ServiceInstance;
 import org.apache.drill.exec.coord.DrillServiceInstanceHelper;
 import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
 import org.apache.drill.exec.work.foreman.DrillbitStatusListener;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.yarn.appMaster.EventContext;
 import org.apache.drill.yarn.appMaster.RegistryHandler;
 import org.apache.drill.yarn.appMaster.Task;
@@ -47,7 +48,7 @@ import org.junit.Test;
  * test in isolation using the Curator-provided test server.
  */
 
-public class TestZkRegistry {
+public class TestZkRegistry extends BaseTest {
   private static final String BARNEY_HOST = "barney";
   private static final String WILMA_HOST = "wilma";
   private static final String TEST_HOST = "host";
diff --git a/exec/interpreter/src/test/java/org/apache/drill/exec/expr/TestPrune.java b/exec/interpreter/src/test/java/org/apache/drill/exec/expr/TestPrune.java
deleted file mode 100644
index 565ab33..0000000
--- a/exec/interpreter/src/test/java/org/apache/drill/exec/expr/TestPrune.java
+++ /dev/null
@@ -1,38 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.drill.exec.expr;
-
-import org.apache.drill.BaseTestQuery;
-import org.apache.drill.common.util.TestTools;
-import org.junit.Test;
-
-public class TestPrune extends BaseTestQuery {
-
-  String MULTILEVEL = TestTools.getWorkingPath() + "/../java-exec/src/test/resources/multilevel";
-
-  @Test
-  public void pruneCompound() throws Exception {
-    test(String.format("select * from dfs.`%s/csv` where x is null and dir1 in ('Q1', 'Q2')", MULTILEVEL));
-  }
-
-  @Test
-  public void pruneSimple() throws Exception {
-    test(String.format("select * from dfs.`%s/csv` where dir1 in ('Q1', 'Q2')", MULTILEVEL));
-  }
-
-}
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/server/Drillbit.java b/exec/java-exec/src/main/java/org/apache/drill/exec/server/Drillbit.java
index 5ff9fda..fb0ff0f 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/server/Drillbit.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/server/Drillbit.java
@@ -85,9 +85,8 @@ public class Drillbit implements AutoCloseable {
      */
     ProtobufPatcher.patch();
     /*
-     * HBase client uses older version of Guava's Stopwatch API,
-     * while Drill ships with 18.x which has changes the scope of
-     * these API to 'package', this code make them accessible.
+     * Some libraries, such as Hadoop or HBase, depend on incompatible versions of Guava.
+     * This code adds back some methods so that the libraries can work with a single Guava version.
      */
     GuavaPatcher.patch();
     Environment.logEnv("Drillbit environment: ", logger);
diff --git a/exec/java-exec/src/test/java/org/apache/drill/BaseTestInheritance.java b/exec/java-exec/src/test/java/org/apache/drill/BaseTestInheritance.java
new file mode 100644
index 0000000..4ae7e88
--- /dev/null
+++ b/exec/java-exec/src/test/java/org/apache/drill/BaseTestInheritance.java
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill;
+
+import org.apache.drill.categories.UnlikelyTest;
+import org.apache.drill.test.BaseTest;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.reflections.Reflections;
+import org.reflections.scanners.SubTypesScanner;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Set;
+import java.util.stream.Collectors;
+
+public class BaseTestInheritance extends BaseTest {
+
+  @Test
+  @Category(UnlikelyTest.class)
+  public void verifyInheritance() {
+    // Get all BaseTest inheritors
+    Reflections reflections = new Reflections("org.apache.drill", new SubTypesScanner(false));
+    Set<Class<? extends BaseTest>> baseTestInheritors = reflections.getSubTypesOf(BaseTest.class);
+
+    // Get all test classes that do not inherit from BaseTest
+    Set<String> testClasses = reflections.getSubTypesOf(Object.class).stream()
+        .filter(c -> !c.isInterface())
+        .filter(c -> c.getSimpleName().toLowerCase().contains("test"))
+        .filter(c -> Arrays.stream(c.getDeclaredMethods())
+                .anyMatch(m -> m.getAnnotation(Test.class) != null))
+        .filter(c -> !baseTestInheritors.contains(c))
+        .map(Class::getName)
+        .collect(Collectors.toSet());
+
+    Assert.assertEquals("Found test classes that are not inherited from BaseTest:", Collections.emptySet(), testClasses);
+  }
+}
diff --git a/exec/java-exec/src/test/java/org/apache/drill/TestImplicitCasting.java b/exec/java-exec/src/test/java/org/apache/drill/TestImplicitCasting.java
index 83f0513..4994bb2 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/TestImplicitCasting.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/TestImplicitCasting.java
@@ -20,6 +20,7 @@ package org.apache.drill;
 import org.apache.drill.categories.SqlTest;
 import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.exec.resolver.TypeCastRules;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
@@ -30,7 +31,7 @@ import java.util.List;
 import static org.junit.Assert.assertEquals;
 
 @Category(SqlTest.class)
-public class TestImplicitCasting {
+public class TestImplicitCasting extends BaseTest {
   @Test
   public void testTimeStampAndTime() {
     final List<TypeProtos.MinorType> inputTypes = Lists.newArrayList();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/common/scanner/TestClassPathScanner.java b/exec/java-exec/src/test/java/org/apache/drill/common/scanner/TestClassPathScanner.java
index 01b1b3b..288f14c 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/common/scanner/TestClassPathScanner.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/common/scanner/TestClassPathScanner.java
@@ -44,13 +44,14 @@ import org.apache.drill.exec.physical.base.PhysicalOperator;
 import org.apache.drill.exec.store.SystemPlugin;
 import org.apache.drill.exec.store.ischema.InfoSchemaStoragePlugin;
 import org.apache.drill.exec.store.sys.SystemTablePlugin;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category({SlowTest.class})
-public class TestClassPathScanner {
+public class TestClassPathScanner extends BaseTest {
 
   private static ScanResult result;
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/TestOpSerialization.java b/exec/java-exec/src/test/java/org/apache/drill/exec/TestOpSerialization.java
index 7fb7bdb..f8d7982 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/TestOpSerialization.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/TestOpSerialization.java
@@ -44,13 +44,14 @@ import org.apache.drill.exec.store.direct.DirectSubScan;
 import org.apache.drill.exec.store.mock.MockSubScanPOP;
 import org.apache.drill.exec.store.pojo.DynamicPojoRecordReader;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Before;
 import org.junit.Test;
 
 import com.fasterxml.jackson.databind.ObjectWriter;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 
-public class TestOpSerialization {
+public class TestOpSerialization extends BaseTest {
   static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(TestOpSerialization.class);
   private DrillConfig config;
   private PhysicalPlanReader reader;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/TestSSLConfig.java b/exec/java-exec/src/test/java/org/apache/drill/exec/TestSSLConfig.java
index 8508080..20a930f 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/TestSSLConfig.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/TestSSLConfig.java
@@ -22,6 +22,7 @@ import org.apache.drill.categories.SecurityTest;
 import org.apache.drill.common.exceptions.DrillException;
 import org.apache.drill.exec.ssl.SSLConfig;
 import org.apache.drill.exec.ssl.SSLConfigBuilder;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.test.ConfigBuilder;
 import org.apache.hadoop.conf.Configuration;
 import org.junit.Test;
@@ -36,7 +37,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 @Category(SecurityTest.class)
-public class TestSSLConfig {
+public class TestSSLConfig extends BaseTest {
 
   @Test
   public void testMissingKeystorePath() throws Exception {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/client/ConnectTriesPropertyTestClusterBits.java b/exec/java-exec/src/test/java/org/apache/drill/exec/client/ConnectTriesPropertyTestClusterBits.java
index 0fea090..dad73cc 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/client/ConnectTriesPropertyTestClusterBits.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/client/ConnectTriesPropertyTestClusterBits.java
@@ -33,6 +33,7 @@ import org.apache.drill.exec.server.Drillbit;
 
 import org.apache.drill.exec.server.RemoteServiceSet;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -40,7 +41,7 @@ import org.junit.Test;
 import static junit.framework.TestCase.assertTrue;
 import static junit.framework.TestCase.fail;
 
-public class ConnectTriesPropertyTestClusterBits {
+public class ConnectTriesPropertyTestClusterBits extends BaseTest {
 
   public static StringBuilder bitInfo;
   public static final String fakeBitsInfo = "127.0.0.1:5000,127.0.0.1:5001";
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/client/DrillSqlLineApplicationTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/client/DrillSqlLineApplicationTest.java
index 945df70..f75e6df 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/client/DrillSqlLineApplicationTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/client/DrillSqlLineApplicationTest.java
@@ -18,6 +18,7 @@
 package org.apache.drill.exec.client;
 
 import org.apache.drill.common.util.DrillVersionInfo;
+import org.apache.drill.test.BaseTest;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import sqlline.Application;
@@ -38,7 +39,7 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertThat;
 import static org.junit.Assert.assertTrue;
 
-public class DrillSqlLineApplicationTest {
+public class DrillSqlLineApplicationTest extends BaseTest {
 
   private static Application application;
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/compile/TestEvaluationVisitor.java b/exec/java-exec/src/test/java/org/apache/drill/exec/compile/TestEvaluationVisitor.java
index 68d165e..4cd5a46 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/compile/TestEvaluationVisitor.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/compile/TestEvaluationVisitor.java
@@ -27,9 +27,10 @@ import org.apache.drill.exec.expr.ValueVectorReadExpression;
 import org.apache.drill.exec.expr.ValueVectorWriteExpression;
 import org.apache.drill.exec.physical.impl.project.Projector;
 import org.apache.drill.exec.record.TypedFieldId;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
-public class TestEvaluationVisitor {
+public class TestEvaluationVisitor extends BaseTest {
 
   @Test
   public void testEvaluation() {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestEphemeralStore.java b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestEphemeralStore.java
index 2cd79ca..81a4c70 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestEphemeralStore.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestEphemeralStore.java
@@ -30,13 +30,14 @@ import org.apache.curator.test.TestingServer;
 import org.apache.drill.exec.ZookeeperTestUtil;
 import org.apache.drill.exec.coord.store.TransientStoreConfig;
 import org.apache.drill.exec.serialization.InstanceSerializer;
+import org.apache.drill.test.BaseTest;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 import org.mockito.Mockito;
 
-public class TestEphemeralStore {
+public class TestEphemeralStore extends BaseTest {
   private final static String root = "/test";
   private final static String path = "test-key";
   private final static String value = "testing";
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestEventDispatcher.java b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestEventDispatcher.java
index f1465a2..3f898a1 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestEventDispatcher.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestEventDispatcher.java
@@ -24,12 +24,13 @@ import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;
 import org.apache.drill.exec.coord.store.TransientStoreConfig;
 import org.apache.drill.exec.coord.store.TransientStoreEvent;
 import org.apache.drill.exec.serialization.InstanceSerializer;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 import org.mockito.Mockito;
 
-public class TestEventDispatcher {
+public class TestEventDispatcher extends BaseTest {
 
   private final static String key = "some-key";
   private final static String value = "some-data";
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestPathUtils.java b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestPathUtils.java
index f086e5f..203f1b8 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestPathUtils.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestPathUtils.java
@@ -17,10 +17,11 @@
  */
 package org.apache.drill.exec.coord.zk;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class TestPathUtils {
+public class TestPathUtils extends BaseTest {
 
   @Test(expected = NullPointerException.class)
   public void testNullSegmentThrowsNPE() {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestZKACL.java b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestZKACL.java
index 07e0465..53da4f5 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestZKACL.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestZKACL.java
@@ -31,6 +31,7 @@ import org.apache.drill.common.scanner.persistence.ScanResult;
 import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.server.BootStrapContext;
 import org.apache.drill.exec.server.options.SystemOptionManager;
+import org.apache.drill.test.BaseTest;
 import org.apache.zookeeper.data.ACL;
 import org.junit.After;
 import org.junit.Assert;
@@ -43,7 +44,7 @@ import java.util.List;
 
 @Ignore("See DRILL-6823")
 @Category(SecurityTest.class)
-public class TestZKACL {
+public class TestZKACL extends BaseTest {
 
   private TestingServer server;
   private final static String cluster_config_znode = "test-cluster_config_znode";
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestZookeeperClient.java b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestZookeeperClient.java
index 6dfbd22..fc4bfa1 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestZookeeperClient.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/coord/zk/TestZookeeperClient.java
@@ -34,6 +34,7 @@ import org.apache.drill.common.exceptions.DrillRuntimeException;
 import org.apache.drill.exec.ZookeeperTestUtil;
 import org.apache.drill.exec.exception.VersionMismatchException;
 import org.apache.drill.exec.store.sys.store.DataChangeVersion;
+import org.apache.drill.test.BaseTest;
 import org.apache.zookeeper.CreateMode;
 import org.junit.After;
 import org.junit.Assert;
@@ -46,7 +47,7 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
-public class TestZookeeperClient {
+public class TestZookeeperClient extends BaseTest {
   private final static String root = "/test";
   private final static String path = "test-key";
   private final static String abspath = PathUtils.join(root, path);
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/dotdrill/TestDotDrillUtil.java b/exec/java-exec/src/test/java/org/apache/drill/exec/dotdrill/TestDotDrillUtil.java
index 1866c9c..ac5c0a9 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/dotdrill/TestDotDrillUtil.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/dotdrill/TestDotDrillUtil.java
@@ -26,6 +26,7 @@ import static org.junit.Assert.assertTrue;
 
 import org.apache.drill.exec.store.dfs.DrillFileSystem;
 import org.apache.drill.test.BaseDirTestWatcher;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -34,7 +35,7 @@ import org.junit.BeforeClass;
 import org.junit.ClassRule;
 import org.junit.Test;
 
-public class TestDotDrillUtil {
+public class TestDotDrillUtil extends BaseTest {
 
   private static File tempDir;
   private static Path tempPath;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/FunctionInitializerTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/FunctionInitializerTest.java
index 4185206..292fc41 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/FunctionInitializerTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/FunctionInitializerTest.java
@@ -21,6 +21,7 @@ import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.apache.drill.categories.SqlFunctionTest;
 import org.apache.drill.exec.udf.dynamic.JarBuilder;
 import org.apache.drill.exec.util.JarUtil;
+import org.apache.drill.test.BaseTest;
 import org.codehaus.janino.Java.CompilationUnit;
 import org.junit.BeforeClass;
 import org.junit.ClassRule;
@@ -47,7 +48,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 @Category(SqlFunctionTest.class)
-public class FunctionInitializerTest {
+public class FunctionInitializerTest extends BaseTest {
 
   @ClassRule
   public static final TemporaryFolder temporaryFolder = new TemporaryFolder();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestSqlPatterns.java b/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestSqlPatterns.java
index 31da5c9..9e5aae6 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestSqlPatterns.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestSqlPatterns.java
@@ -30,13 +30,14 @@ import java.util.List;
 import org.apache.drill.common.exceptions.DrillRuntimeException;
 import org.apache.drill.exec.memory.BufferAllocator;
 import org.apache.drill.exec.memory.RootAllocatorFactory;
+import org.apache.drill.test.BaseTest;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
 import io.netty.buffer.DrillBuf;
 
-public class TestSqlPatterns {
+public class TestSqlPatterns extends BaseTest {
   BufferAllocator allocator;
   DrillBuf drillBuf;
   CharsetEncoder charsetEncoder;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/registry/FunctionRegistryHolderTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/registry/FunctionRegistryHolderTest.java
index 3e8e6b8..6004b96 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/registry/FunctionRegistryHolderTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/registry/FunctionRegistryHolderTest.java
@@ -21,6 +21,7 @@ import org.apache.drill.shaded.guava.com.google.common.collect.ArrayListMultimap
 import org.apache.drill.shaded.guava.com.google.common.collect.ListMultimap;
 import org.apache.drill.categories.SqlFunctionTest;
 import org.apache.drill.exec.expr.fn.DrillFuncHolder;
+import org.apache.drill.test.BaseTest;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -45,7 +46,7 @@ import static org.junit.Assert.assertTrue;
 import static org.mockito.Mockito.mock;
 
 @Category(SqlFunctionTest.class)
-public class FunctionRegistryHolderTest {
+public class FunctionRegistryHolderTest extends BaseTest {
 
   private static final String built_in = "built-in";
   private static final String udf_jar = "DrillUDF-1.0.jar";
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/common/HashPartitionTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/common/HashPartitionTest.java
index 1189b9e..ce79467 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/common/HashPartitionTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/common/HashPartitionTest.java
@@ -46,6 +46,7 @@ import org.apache.drill.exec.record.CloseableRecordBatch;
 import org.apache.drill.exec.record.MaterializedField;
 import org.apache.drill.exec.record.RecordBatch;
 import org.apache.drill.test.BaseDirTestWatcher;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.test.OperatorFixture;
 import org.apache.drill.exec.physical.rowSet.DirectRowSet;
 import org.apache.drill.exec.physical.rowSet.RowSet;
@@ -57,7 +58,7 @@ import org.junit.Test;
 
 import java.util.List;
 
-public class HashPartitionTest {
+public class HashPartitionTest extends BaseTest {
   @Rule
   public final BaseDirTestWatcher dirTestWatcher = new BaseDirTestWatcher();
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/common/HashTableAllocationTrackerTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/common/HashTableAllocationTrackerTest.java
index 2445500..246084d 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/common/HashTableAllocationTrackerTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/common/HashTableAllocationTrackerTest.java
@@ -18,12 +18,13 @@
 package org.apache.drill.exec.physical.impl.common;
 
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
 import static org.apache.drill.exec.physical.impl.common.HashTable.BATCH_SIZE;
 
-public class HashTableAllocationTrackerTest {
+public class HashTableAllocationTrackerTest extends BaseTest {
   @Test
   public void testDoubleGetNextCall() {
     final HashTableConfig config = new HashTableConfig(100, true, .5f, Lists.newArrayList(), Lists.newArrayList(), Lists.newArrayList());
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestBatchSizePredictorImpl.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestBatchSizePredictorImpl.java
index e16cdf6..b436677 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestBatchSizePredictorImpl.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestBatchSizePredictorImpl.java
@@ -18,10 +18,11 @@
 package org.apache.drill.exec.physical.impl.join;
 
 import org.apache.drill.exec.vector.IntVector;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class TestBatchSizePredictorImpl {
+public class TestBatchSizePredictorImpl extends BaseTest {
   @Test
   public void testComputeMaxBatchSizeHash()
   {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestBuildSidePartitioningImpl.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestBuildSidePartitioningImpl.java
index bff28a8..1ccc2cd 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestBuildSidePartitioningImpl.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestBuildSidePartitioningImpl.java
@@ -20,10 +20,11 @@ package org.apache.drill.exec.physical.impl.join;
 import org.apache.drill.shaded.guava.com.google.common.base.Preconditions;
 import org.apache.drill.common.map.CaseInsensitiveMap;
 import org.apache.drill.exec.record.RecordBatch;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class TestBuildSidePartitioningImpl {
+public class TestBuildSidePartitioningImpl extends BaseTest {
   @Test
   public void testSimpleReserveMemoryCalculationNoHashFirstCycle() {
     testSimpleReserveMemoryCalculationNoHashHelper(true);
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinHelperSizeCalculatorImpl.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinHelperSizeCalculatorImpl.java
index 5f7a36a..ac87193 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinHelperSizeCalculatorImpl.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinHelperSizeCalculatorImpl.java
@@ -20,10 +20,11 @@ package org.apache.drill.exec.physical.impl.join;
 import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.exec.expr.TypeHelper;
 import org.apache.drill.exec.record.RecordBatch;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class TestHashJoinHelperSizeCalculatorImpl {
+public class TestHashJoinHelperSizeCalculatorImpl extends BaseTest {
   @Test
   public void simpleCalculateSize() {
     final long intSize =
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinMemoryCalculator.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinMemoryCalculator.java
index b13829b..6e2b9d6 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinMemoryCalculator.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashJoinMemoryCalculator.java
@@ -17,9 +17,10 @@
  */
 package org.apache.drill.exec.physical.impl.join;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
-public class TestHashJoinMemoryCalculator {
+public class TestHashJoinMemoryCalculator extends BaseTest {
   @Test // Make sure no exception is thrown
   public void testMakeDebugString()
   {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashTableSizeCalculatorConservativeImpl.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashTableSizeCalculatorConservativeImpl.java
index 3203a45..e00f935 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashTableSizeCalculatorConservativeImpl.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashTableSizeCalculatorConservativeImpl.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.physical.impl.join;
 import org.apache.drill.shaded.guava.com.google.common.collect.Maps;
 import org.apache.drill.exec.record.RecordBatchSizer;
 import org.apache.drill.exec.vector.UInt4Vector;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -28,7 +29,7 @@ import java.util.Map;
 /**
  * This is a test for the more conservative hash table memory calculator {@link HashTableSizeCalculatorConservativeImpl}.
  */
-public class TestHashTableSizeCalculatorConservativeImpl {
+public class TestHashTableSizeCalculatorConservativeImpl extends BaseTest {
   @Test
   public void testCalculateHashTableSize() {
     final int maxNumRecords = 40;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashTableSizeCalculatorLeanImpl.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashTableSizeCalculatorLeanImpl.java
index 82121b9..b2260d4 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashTableSizeCalculatorLeanImpl.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestHashTableSizeCalculatorLeanImpl.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.physical.impl.join;
 import org.apache.drill.shaded.guava.com.google.common.collect.Maps;
 import org.apache.drill.exec.record.RecordBatchSizer;
 import org.apache.drill.exec.vector.UInt4Vector;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -28,7 +29,7 @@ import java.util.Map;
 /**
  * This is a test for the more accurate hash table memory calculator {@link HashTableSizeCalculatorLeanImpl}.
  */
-public class TestHashTableSizeCalculatorLeanImpl {
+public class TestHashTableSizeCalculatorLeanImpl extends BaseTest {
   @Test
   public void testCalculateHashTableSize() {
     final int maxNumRecords = 40;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestPartitionStat.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestPartitionStat.java
index 627d737..f36b6d5 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestPartitionStat.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestPartitionStat.java
@@ -17,11 +17,12 @@
  */
 package org.apache.drill.exec.physical.impl.join;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class TestPartitionStat
-{
+public class TestPartitionStat extends BaseTest {
+
   @Test
   public void simpleAddBatchTest() {
     final PartitionStatImpl partitionStat = new PartitionStatImpl();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestPostBuildCalculationsImpl.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestPostBuildCalculationsImpl.java
index 0636cb7..74448ff 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestPostBuildCalculationsImpl.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/join/TestPostBuildCalculationsImpl.java
@@ -19,6 +19,7 @@ package org.apache.drill.exec.physical.impl.join;
 
 import org.apache.drill.shaded.guava.com.google.common.base.Preconditions;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -26,7 +27,7 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
 
-public class TestPostBuildCalculationsImpl {
+public class TestPostBuildCalculationsImpl extends BaseTest {
   @Test
   public void testProbeTooBig() {
     final int minProbeRecordsPerBatch = 10;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/scan/project/projSet/TestProjectionSet.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/scan/project/projSet/TestProjectionSet.java
index 30aa74e..ed7ba0e 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/scan/project/projSet/TestProjectionSet.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/scan/project/projSet/TestProjectionSet.java
@@ -39,6 +39,7 @@ import org.apache.drill.exec.record.metadata.TupleMetadata;
 import org.apache.drill.exec.vector.accessor.convert.ColumnConversionFactory;
 import org.apache.drill.exec.vector.accessor.convert.ConvertStringToInt;
 import org.apache.drill.exec.vector.accessor.convert.StandardConversions;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -57,7 +58,7 @@ import org.junit.experimental.categories.Category;
  */
 
 @Category(RowSetTests.class)
-public class TestProjectionSet {
+public class TestProjectionSet extends BaseTest {
 
   /**
    * Empty projection, no schema
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/svremover/AbstractGenericCopierTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/svremover/AbstractGenericCopierTest.java
index 20f287a..30111a1 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/svremover/AbstractGenericCopierTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/svremover/AbstractGenericCopierTest.java
@@ -30,6 +30,7 @@ import org.apache.drill.exec.record.BatchSchemaBuilder;
 import org.apache.drill.exec.record.metadata.SchemaBuilder;
 import org.apache.drill.exec.vector.SchemaChangeCallBack;
 import org.apache.drill.test.BaseDirTestWatcher;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.test.OperatorFixture;
 import org.apache.drill.exec.physical.rowSet.DirectRowSet;
 import org.apache.drill.exec.physical.rowSet.RowSet;
@@ -38,7 +39,7 @@ import org.apache.drill.test.rowSet.RowSetComparison;
 import org.junit.Rule;
 import org.junit.Test;
 
-public abstract class AbstractGenericCopierTest {
+public abstract class AbstractGenericCopierTest extends BaseTest {
   @Rule
   public final BaseDirTestWatcher baseDirTestWatcher = new BaseDirTestWatcher();
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/resultSet/project/TestProjectedTuple.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/resultSet/project/TestProjectedTuple.java
index b374603..c82870d 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/resultSet/project/TestProjectedTuple.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/resultSet/project/TestProjectedTuple.java
@@ -33,6 +33,7 @@ import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.exec.physical.resultSet.impl.RowSetTestUtils;
 import org.apache.drill.exec.physical.resultSet.project.RequestedTuple.RequestedColumn;
 import org.apache.drill.exec.physical.resultSet.project.RequestedTuple.TupleProjectionType;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -48,7 +49,7 @@ import org.junit.experimental.categories.Category;
  */
 
 @Category(RowSetTests.class)
-public class TestProjectedTuple {
+public class TestProjectedTuple extends BaseTest {
 
   @Test
   public void testProjectionAll() {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/resultSet/project/TestProjectionType.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/resultSet/project/TestProjectionType.java
index 88dd665..341e10d 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/resultSet/project/TestProjectionType.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/resultSet/project/TestProjectionType.java
@@ -24,11 +24,12 @@ import static org.junit.Assert.assertTrue;
 import org.apache.drill.categories.RowSetTests;
 import org.apache.drill.common.types.TypeProtos.MinorType;
 import org.apache.drill.common.types.Types;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category(RowSetTests.class)
-public class TestProjectionType {
+public class TestProjectionType extends BaseTest {
 
   @Test
   public void testQueries() {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/common/TestNumericEquiDepthHistogram.java b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/common/TestNumericEquiDepthHistogram.java
index cd82e2a..965cb4a 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/common/TestNumericEquiDepthHistogram.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/common/TestNumericEquiDepthHistogram.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.planner.common;
 import org.apache.drill.categories.PlannerTest;
 
 import org.apache.drill.shaded.guava.com.google.common.collect.BoundType;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.junit.Assert;
@@ -27,7 +28,7 @@ import org.apache.drill.shaded.guava.com.google.common.collect.Range;
 
 
 @Category(PlannerTest.class)
-public class TestNumericEquiDepthHistogram {
+public class TestNumericEquiDepthHistogram extends BaseTest {
 
   @Test
   public void testHistogramWithUniqueEndpoints() throws Exception {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/fragment/TestHardAffinityFragmentParallelizer.java b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/fragment/TestHardAffinityFragmentParallelizer.java
index 696486f..c7be3ff 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/fragment/TestHardAffinityFragmentParallelizer.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/fragment/TestHardAffinityFragmentParallelizer.java
@@ -23,6 +23,7 @@ import org.apache.drill.categories.PlannerTest;
 import org.apache.drill.exec.physical.EndpointAffinity;
 import org.apache.drill.exec.physical.base.PhysicalOperator;
 import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -39,7 +40,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 @Category(PlannerTest.class)
-public class TestHardAffinityFragmentParallelizer {
+public class TestHardAffinityFragmentParallelizer extends BaseTest {
 
   // Create a set of test endpoints
   private static final DrillbitEndpoint N1_EP1 = newDrillbitEndpoint("node1", 30010);
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/logical/DrillOptiqTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/logical/DrillOptiqTest.java
index 4df08fe..1f35bc6 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/logical/DrillOptiqTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/logical/DrillOptiqTest.java
@@ -31,6 +31,7 @@ import org.apache.calcite.sql.type.SqlTypeName;
 import org.apache.drill.categories.PlannerTest;
 import org.apache.drill.common.exceptions.UserException;
 import org.apache.drill.exec.planner.types.DrillRelDataTypeSystem;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -39,7 +40,7 @@ import java.util.LinkedList;
 import java.util.List;
 
 @Category(PlannerTest.class)
-public class DrillOptiqTest {
+public class DrillOptiqTest extends BaseTest {
 
   /* Method checks if we raise the appropriate error while dealing with RexNode that cannot be converted to
    * equivalent Drill expressions
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/logical/FilterSplitTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/logical/FilterSplitTest.java
index b70fc9b..d153dbf 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/planner/logical/FilterSplitTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/planner/logical/FilterSplitTest.java
@@ -31,9 +31,10 @@ import org.apache.calcite.rex.RexBuilder;
 import org.apache.calcite.rex.RexNode;
 import org.apache.calcite.sql.fun.SqlStdOperatorTable;
 import org.apache.calcite.sql.type.SqlTypeName;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
-public class FilterSplitTest {
+public class FilterSplitTest extends BaseTest {
   static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(FilterSplitTest.class);
 
   final JavaTypeFactory t = new JavaTypeFactoryImpl();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/record/TestMaterializedField.java b/exec/java-exec/src/test/java/org/apache/drill/exec/record/TestMaterializedField.java
index fc76f1c..5ab5a56 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/record/TestMaterializedField.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/record/TestMaterializedField.java
@@ -23,12 +23,13 @@ import org.apache.drill.common.types.Types;
 
 import static org.junit.Assert.assertTrue;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category(VectorTest.class)
-public class TestMaterializedField {
+public class TestMaterializedField extends BaseTest {
 
   private static final String PARENT_NAME = "parent";
   private static final String PARENT_SECOND_NAME = "parent2";
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/record/metadata/schema/TestSchemaProvider.java b/exec/java-exec/src/test/java/org/apache/drill/exec/record/metadata/schema/TestSchemaProvider.java
index c07610d..b22703f 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/record/metadata/schema/TestSchemaProvider.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/record/metadata/schema/TestSchemaProvider.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.record.metadata.schema;
 import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.exec.record.metadata.ColumnMetadata;
 import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.test.BaseTest;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.ExpectedException;
@@ -41,7 +42,7 @@ import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
-public class TestSchemaProvider {
+public class TestSchemaProvider extends BaseTest {
 
   @Rule
   public TemporaryFolder folder = new TemporaryFolder();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/TestResourcePoolTree.java b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/TestResourcePoolTree.java
index 25ab990..be8203b 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/TestResourcePoolTree.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/TestResourcePoolTree.java
@@ -30,6 +30,7 @@ import org.apache.drill.exec.resourcemgr.config.ResourcePoolTree;
 import org.apache.drill.exec.resourcemgr.config.ResourcePoolTreeImpl;
 import org.apache.drill.exec.resourcemgr.config.exception.RMConfigException;
 import org.apache.drill.exec.server.options.OptionValue;
+import org.apache.drill.test.BaseTest;
 import org.junit.After;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -50,7 +51,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 @Category(ResourceManagerTest.class)
-public final class TestResourcePoolTree {
+public final class TestResourcePoolTree extends BaseTest {
 
   private static final Map<String, Object> poolTreeConfig = new HashMap<>();
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectionpolicy/TestBestFitSelectionPolicy.java b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectionpolicy/TestBestFitSelectionPolicy.java
index 3dcde00..0a281a0 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectionpolicy/TestBestFitSelectionPolicy.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectionpolicy/TestBestFitSelectionPolicy.java
@@ -25,6 +25,7 @@ import org.apache.drill.exec.resourcemgr.config.QueryQueueConfig;
 import org.apache.drill.exec.resourcemgr.config.RMCommonDefaults;
 import org.apache.drill.exec.resourcemgr.config.ResourcePool;
 import org.apache.drill.exec.resourcemgr.config.exception.QueueSelectionException;
+import org.apache.drill.test.BaseTest;
 import org.junit.After;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -38,7 +39,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 @Category(ResourceManagerTest.class)
-public final class TestBestFitSelectionPolicy {
+public final class TestBestFitSelectionPolicy extends BaseTest {
 
   private static QueueSelectionPolicy selectionPolicy;
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectionpolicy/TestDefaultSelectionPolicy.java b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectionpolicy/TestDefaultSelectionPolicy.java
index a78d106..2685875 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectionpolicy/TestDefaultSelectionPolicy.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectionpolicy/TestDefaultSelectionPolicy.java
@@ -22,6 +22,7 @@ import org.apache.drill.exec.ops.QueryContext;
 import org.apache.drill.exec.proto.UserBitShared;
 import org.apache.drill.exec.resourcemgr.config.ResourcePool;
 import org.apache.drill.exec.resourcemgr.config.exception.QueueSelectionException;
+import org.apache.drill.test.BaseTest;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -34,7 +35,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 @Category(ResourceManagerTest.class)
-public final class TestDefaultSelectionPolicy {
+public final class TestDefaultSelectionPolicy extends BaseTest {
 
   private static QueueSelectionPolicy selectionPolicy;
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestAclSelector.java b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestAclSelector.java
index fbdb108..497f3bb 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestAclSelector.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestAclSelector.java
@@ -22,6 +22,7 @@ import com.typesafe.config.ConfigFactory;
 import com.typesafe.config.ConfigValueFactory;
 import org.apache.drill.categories.ResourceManagerTest;
 import org.apache.drill.exec.resourcemgr.config.exception.RMConfigException;
+import org.apache.drill.test.BaseTest;
 import org.junit.After;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -38,7 +39,7 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
 @Category(ResourceManagerTest.class)
-public final class TestAclSelector {
+public final class TestAclSelector extends BaseTest {
 
   private static final List<String> groupsValue = new ArrayList<>();
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestComplexSelectors.java b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestComplexSelectors.java
index d381d05..bbd5ad9 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestComplexSelectors.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestComplexSelectors.java
@@ -25,6 +25,7 @@ import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.ops.QueryContext;
 import org.apache.drill.exec.resourcemgr.config.exception.RMConfigException;
 import org.apache.drill.exec.server.options.OptionValue;
+import org.apache.drill.test.BaseTest;
 import org.junit.After;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -41,7 +42,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 @Category(ResourceManagerTest.class)
-public final class TestComplexSelectors {
+public final class TestComplexSelectors extends BaseTest {
 
   private static final Map<String, String> tagSelectorConfig1 = new HashMap<>();
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestNotEqualSelector.java b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestNotEqualSelector.java
index b3053aa..23899b3 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestNotEqualSelector.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestNotEqualSelector.java
@@ -25,6 +25,7 @@ import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.ops.QueryContext;
 import org.apache.drill.exec.resourcemgr.config.exception.RMConfigException;
 import org.apache.drill.exec.server.options.OptionValue;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -39,7 +40,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 @Category(ResourceManagerTest.class)
-public final class TestNotEqualSelector {
+public final class TestNotEqualSelector extends BaseTest {
 
   private ResourcePoolSelector testCommonHelper(Map<String, ? extends Object> selectorValue) throws RMConfigException {
     Config testConfig = ConfigFactory.empty()
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestResourcePoolSelectors.java b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestResourcePoolSelectors.java
index 0a2d121..e8695c4 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestResourcePoolSelectors.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestResourcePoolSelectors.java
@@ -21,13 +21,14 @@ import com.typesafe.config.Config;
 import com.typesafe.config.ConfigFactory;
 import org.apache.drill.categories.ResourceManagerTest;
 import org.apache.drill.exec.resourcemgr.config.exception.RMConfigException;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 import static org.junit.Assert.assertTrue;
 
 @Category(ResourceManagerTest.class)
-public class TestResourcePoolSelectors {
+public class TestResourcePoolSelectors extends BaseTest {
 
   @Test
   public void testNullSelectorConfig() throws Exception {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestTagSelector.java b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestTagSelector.java
index d937315..aa0800e 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestTagSelector.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/resourcemgr/config/selectors/TestTagSelector.java
@@ -25,6 +25,7 @@ import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.ops.QueryContext;
 import org.apache.drill.exec.resourcemgr.config.exception.RMConfigException;
 import org.apache.drill.exec.server.options.OptionValue;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -35,7 +36,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 @Category(ResourceManagerTest.class)
-public final class TestTagSelector {
+public final class TestTagSelector extends BaseTest {
 
   private ResourcePoolSelector testCommonHelper(Object tagValue) throws RMConfigException {
     Config testConfig = ConfigFactory.empty()
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/rpc/control/ConnectionManagerRegistryTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/rpc/control/ConnectionManagerRegistryTest.java
index 667e440..a76f1c4 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/rpc/control/ConnectionManagerRegistryTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/rpc/control/ConnectionManagerRegistryTest.java
@@ -18,6 +18,7 @@
 package org.apache.drill.exec.rpc.control;
 
 import org.apache.drill.exec.proto.CoordinationProtos.DrillbitEndpoint;
+import org.apache.drill.test.BaseTest;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
@@ -25,7 +26,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 import static org.mockito.Mockito.mock;
 
-public class ConnectionManagerRegistryTest {
+public class ConnectionManagerRegistryTest extends BaseTest {
 
   private static final DrillbitEndpoint localEndpoint = DrillbitEndpoint.newBuilder()
     .setAddress("10.0.0.1")
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/rpc/control/TestLocalControlConnectionManager.java b/exec/java-exec/src/test/java/org/apache/drill/exec/rpc/control/TestLocalControlConnectionManager.java
index c0cf09b..fd9a20f 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/rpc/control/TestLocalControlConnectionManager.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/rpc/control/TestLocalControlConnectionManager.java
@@ -27,6 +27,7 @@ import org.apache.drill.exec.rpc.Acks;
 import org.apache.drill.exec.rpc.RpcException;
 import org.apache.drill.exec.rpc.RpcOutcomeListener;
 import org.apache.drill.exec.work.batch.ControlMessageHandler;
+import org.apache.drill.test.BaseTest;
 import org.hamcrest.Description;
 import org.hamcrest.TypeSafeMatcher;
 import org.junit.Before;
@@ -42,7 +43,7 @@ import static org.junit.Assert.assertTrue;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
-public class TestLocalControlConnectionManager {
+public class TestLocalControlConnectionManager extends BaseTest {
 
   private static final DrillbitEndpoint localEndpoint = DrillbitEndpoint.newBuilder()
     .setAddress("10.0.0.1")
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/TestFailureUtils.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/TestFailureUtils.java
index 6736b46..344673a 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/TestFailureUtils.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/TestFailureUtils.java
@@ -18,6 +18,7 @@
 package org.apache.drill.exec.server;
 
 import org.apache.drill.exec.exception.OutOfMemoryException;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -25,7 +26,7 @@ import java.io.IOException;
 
 import static org.apache.drill.exec.server.FailureUtils.DIRECT_MEMORY_OOM_MESSAGE;
 
-public class TestFailureUtils {
+public class TestFailureUtils extends BaseTest {
   @Test
   public void testIsDirectMemoryOOM() {
     Assert.assertTrue(FailureUtils.isDirectMemoryOOM(new OutOfMemoryException()));
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/OptionValueTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/OptionValueTest.java
index a5a7bbf..56f4ba6 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/OptionValueTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/OptionValueTest.java
@@ -17,10 +17,11 @@
  */
 package org.apache.drill.exec.server.options;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class OptionValueTest {
+public class OptionValueTest extends BaseTest {
   @Test
   public void createBooleanKindTest() {
     final OptionValue createdValue = OptionValue.create(
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/PersistedOptionValueTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/PersistedOptionValueTest.java
index 9d22121..bf8ae33 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/PersistedOptionValueTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/PersistedOptionValueTest.java
@@ -22,12 +22,13 @@ import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import org.apache.drill.common.util.DrillFileUtils;
 import org.apache.drill.exec.serialization.JacksonSerializer;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
 import java.io.IOException;
 
-public class PersistedOptionValueTest {
+public class PersistedOptionValueTest extends BaseTest {
   /**
    * DRILL-5809
    * Note: If this test breaks you are probably breaking backward and forward compatibility. Verify with the community
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/TestConfigLinkage.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/TestConfigLinkage.java
index 9ecf73e..43e01b3 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/TestConfigLinkage.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/options/TestConfigLinkage.java
@@ -22,6 +22,7 @@ import org.apache.drill.categories.SlowTest;
 import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.store.sys.SystemTable;
 import org.apache.drill.test.BaseDirTestWatcher;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.test.ClientFixture;
 import org.apache.drill.test.ClusterFixture;
 import org.apache.drill.test.ClusterFixtureBuilder;
@@ -41,7 +42,7 @@ import static org.junit.Assert.assertEquals;
  * */
 
 @Category({OptionsTest.class, SlowTest.class})
-public class TestConfigLinkage {
+public class TestConfigLinkage extends BaseTest {
   public static final String MOCK_PROPERTY = "mock.prop";
 
   @Rule
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/StatusResourcesTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/StatusResourcesTest.java
index 0efc12b..f3abbc7 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/StatusResourcesTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/StatusResourcesTest.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.server.rest;
 import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.server.options.OptionDefinition;
 import org.apache.drill.test.BaseDirTestWatcher;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.test.ClientFixture;
 import org.apache.drill.test.ClusterFixture;
 import org.apache.drill.test.ClusterFixtureBuilder;
@@ -31,7 +32,7 @@ import org.junit.Test;
 import static org.apache.drill.exec.server.options.TestConfigLinkage.MOCK_PROPERTY;
 import static org.apache.drill.exec.server.options.TestConfigLinkage.createMockPropOptionDefinition;
 
-public class StatusResourcesTest {
+public class StatusResourcesTest extends BaseTest {
   @Rule
   public final BaseDirTestWatcher dirTestWatcher = new BaseDirTestWatcher();
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/TestMainLoginPageModel.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/TestMainLoginPageModel.java
index 3a45275..29bf97f 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/TestMainLoginPageModel.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/TestMainLoginPageModel.java
@@ -24,6 +24,7 @@ import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.server.DrillbitContext;
 import org.apache.drill.exec.server.rest.LogInLogOutResources.MainLoginPageModel;
 import org.apache.drill.exec.work.WorkManager;
+import org.apache.drill.test.BaseTest;
 import org.junit.Before;
 import org.junit.Test;
 import org.mockito.InjectMocks;
@@ -36,7 +37,7 @@ import static org.mockito.Mockito.when;
 /**
  * Test for {@link LogInLogOutResources.MainLoginPageModel} with various configurations done in DrillConfig
  */
-public class TestMainLoginPageModel {
+public class TestMainLoginPageModel extends BaseTest {
 
   @Mock
   WorkManager workManager;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/WebSessionResourcesTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/WebSessionResourcesTest.java
index c18f7d8..78cbd1e 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/WebSessionResourcesTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/WebSessionResourcesTest.java
@@ -25,6 +25,7 @@ import io.netty.util.concurrent.GenericFutureListener;
 import org.apache.drill.exec.memory.BufferAllocator;
 import org.apache.drill.exec.rpc.TransportCheck;
 import org.apache.drill.exec.rpc.user.UserSession;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import java.net.SocketAddress;
@@ -40,7 +41,7 @@ import static org.mockito.Mockito.verify;
  * Validates {@link WebSessionResources} close works as expected w.r.t {@link io.netty.channel.AbstractChannel.CloseFuture}
  * associated with it.
  */
-public class WebSessionResourcesTest {
+public class WebSessionResourcesTest extends BaseTest {
   //private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(WebSessionResourcesTest.class);
 
   private WebSessionResources webSessionResources;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestDrillSpnegoAuthenticator.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestDrillSpnegoAuthenticator.java
index f63aee5..efa1974 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestDrillSpnegoAuthenticator.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestDrillSpnegoAuthenticator.java
@@ -30,6 +30,7 @@ import org.apache.drill.exec.server.rest.WebServerConstants;
 import org.apache.drill.exec.server.rest.auth.DrillSpnegoAuthenticator;
 import org.apache.drill.exec.server.rest.auth.DrillSpnegoLoginService;
 import org.apache.drill.test.BaseDirTestWatcher;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.security.authentication.util.KerberosName;
 import org.apache.hadoop.security.authentication.util.KerberosUtil;
 import org.apache.kerby.kerberos.kerb.client.JaasKrbUtil;
@@ -68,7 +69,7 @@ import static org.mockito.Mockito.verify;
  */
 @Ignore("See DRILL-5387")
 @Category(SecurityTest.class)
-public class TestDrillSpnegoAuthenticator {
+public class TestDrillSpnegoAuthenticator extends BaseTest {
 
   private static KerberosHelper spnegoHelper;
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestSpnegoAuthentication.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestSpnegoAuthentication.java
index e4f82a1..572f32c 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestSpnegoAuthentication.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestSpnegoAuthentication.java
@@ -35,6 +35,7 @@ import org.apache.drill.exec.server.options.SystemOptionManager;
 import org.apache.drill.exec.server.rest.auth.DrillHttpSecurityHandlerProvider;
 import org.apache.drill.exec.server.rest.auth.DrillSpnegoLoginService;
 import org.apache.drill.test.BaseDirTestWatcher;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.security.authentication.util.KerberosName;
 import org.apache.hadoop.security.authentication.util.KerberosUtil;
 import org.apache.kerby.kerberos.kerb.client.JaasKrbUtil;
@@ -65,7 +66,7 @@ import static org.junit.Assert.assertTrue;
  */
 @Ignore("See DRILL-5387")
 @Category(SecurityTest.class)
-public class TestSpnegoAuthentication {
+public class TestSpnegoAuthentication extends BaseTest {
 
   private static KerberosHelper spnegoHelper;
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestSpnegoConfig.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestSpnegoConfig.java
index ef6e46c..3831f50 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestSpnegoConfig.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/rest/spnego/TestSpnegoConfig.java
@@ -27,6 +27,7 @@ import org.apache.drill.exec.rpc.security.KerberosHelper;
 import org.apache.drill.exec.rpc.user.security.testing.UserAuthenticatorTestImpl;
 import org.apache.drill.exec.server.rest.auth.SpnegoConfig;
 import org.apache.drill.test.BaseDirTestWatcher;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.util.KerberosName;
 import org.apache.hadoop.security.authentication.util.KerberosUtil;
@@ -47,7 +48,7 @@ import static org.junit.Assert.assertTrue;
  */
 @Ignore("See DRILL-5387")
 @Category(SecurityTest.class)
-public class TestSpnegoConfig {
+public class TestSpnegoConfig extends BaseTest {
   //private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(TestSpnegoConfig.class);
 
   private static KerberosHelper spnegoHelper;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestSqlBracketlessSyntax.java b/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestSqlBracketlessSyntax.java
index cc21b43..806d4a5 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestSqlBracketlessSyntax.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestSqlBracketlessSyntax.java
@@ -28,6 +28,7 @@ import org.apache.drill.exec.planner.physical.PlannerSettings;
 import org.apache.drill.exec.planner.sql.DrillConvertletTable;
 import org.apache.drill.exec.planner.sql.parser.CompoundIdentifierConverter;
 import org.apache.drill.exec.planner.sql.parser.impl.DrillParserImpl;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.test.DrillAssert;
 import org.apache.calcite.sql.SqlNode;
 import org.apache.calcite.sql.parser.SqlParser;
@@ -35,7 +36,7 @@ import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category(SqlTest.class)
-public class TestSqlBracketlessSyntax {
+public class TestSqlBracketlessSyntax extends BaseTest {
   static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(TestSqlBracketlessSyntax.class);
 
   @Test
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/StorageStrategyTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/StorageStrategyTest.java
index 11102f9..14968ae 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/StorageStrategyTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/StorageStrategyTest.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.store;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.apache.drill.shaded.guava.com.google.common.io.Files;
 import org.apache.drill.exec.ExecTest;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
@@ -33,7 +34,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
-public class StorageStrategyTest {
+public class StorageStrategyTest extends BaseTest {
   private static final FsPermission FULL_PERMISSION = FsPermission.getDirDefault();
   private static final StorageStrategy PERSISTENT_STRATEGY = new StorageStrategy("002", false);
   private static final StorageStrategy TEMPORARY_STRATEGY = new StorageStrategy("077", true);
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java
index c3ff89a..4a77fbe 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/bson/TestBsonRecordReader.java
@@ -32,6 +32,7 @@ import org.apache.drill.exec.store.TestOutputMutator;
 import org.apache.drill.exec.vector.complex.impl.SingleMapReaderImpl;
 import org.apache.drill.exec.vector.complex.impl.VectorContainerWriter;
 import org.apache.drill.exec.vector.complex.reader.FieldReader;
+import org.apache.drill.test.BaseTest;
 import org.bson.BsonBinary;
 import org.bson.BsonBinarySubType;
 import org.bson.BsonBoolean;
@@ -52,7 +53,7 @@ import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
-public class TestBsonRecordReader {
+public class TestBsonRecordReader extends BaseTest {
   private BufferAllocator allocator;
   private VectorContainerWriter writer;
   private TestOutputMutator mutator;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestDrillFileSystem.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestDrillFileSystem.java
index 5504382..ec841a3 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestDrillFileSystem.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestDrillFileSystem.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.store.dfs;
 import org.apache.drill.exec.ops.OpProfileDef;
 import org.apache.drill.exec.ops.OperatorStats;
 import org.apache.drill.exec.proto.UserBitShared.OperatorProfile;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -33,7 +34,7 @@ import java.io.PrintWriter;
 
 import static org.junit.Assert.assertTrue;
 
-public class TestDrillFileSystem {
+public class TestDrillFileSystem extends BaseTest {
 
   private static String tempFilePath;
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestFormatPluginOptionExtractor.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestFormatPluginOptionExtractor.java
index 7721dc7..871233b 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestFormatPluginOptionExtractor.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/dfs/TestFormatPluginOptionExtractor.java
@@ -23,6 +23,7 @@ import org.apache.drill.common.scanner.RunTimeScan;
 import org.apache.drill.common.scanner.persistence.ScanResult;
 import org.apache.drill.exec.store.easy.text.TextFormatPlugin.TextFormatConfig;
 import org.apache.drill.exec.store.image.ImageFormatConfig;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import java.util.Collection;
@@ -31,7 +32,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.fail;
 
 
-public class TestFormatPluginOptionExtractor {
+public class TestFormatPluginOptionExtractor extends BaseTest {
 
   @Test
   public void test() {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestComplexColumnInSchema.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestComplexColumnInSchema.java
index ef6fdd4..6d0e785 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestComplexColumnInSchema.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestComplexColumnInSchema.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.store.parquet;
 import org.apache.drill.categories.ParquetTest;
 import org.apache.drill.categories.UnlikelyTest;
 import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.parquet.hadoop.metadata.ParquetMetadata;
@@ -40,7 +41,7 @@ import java.io.IOException;
  * This test checks correctness of complex column detection in the Parquet file schema.
  */
 @Category({ParquetTest.class, UnlikelyTest.class})
-public class TestComplexColumnInSchema {
+public class TestComplexColumnInSchema extends BaseTest {
 
   /*
   Parquet schema:
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetMetadataVersion.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetMetadataVersion.java
index d2fce64..f880126 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetMetadataVersion.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetMetadataVersion.java
@@ -21,6 +21,7 @@ import org.apache.drill.categories.ParquetTest;
 import org.apache.drill.categories.UnlikelyTest;
 import org.apache.drill.common.exceptions.DrillRuntimeException;
 import org.apache.drill.exec.store.parquet.metadata.MetadataVersion;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -28,7 +29,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 @Category({ParquetTest.class, UnlikelyTest.class})
-public class TestParquetMetadataVersion {
+public class TestParquetMetadataVersion extends BaseTest {
 
   @Test
   public void testFirstLetter() throws Exception {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetReaderConfig.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetReaderConfig.java
index 0c0dea5..69f9666 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetReaderConfig.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetReaderConfig.java
@@ -23,6 +23,7 @@ import org.apache.drill.categories.UnlikelyTest;
 import org.apache.drill.common.config.DrillConfig;
 import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.server.options.SystemOptionManager;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.parquet.ParquetReadOptions;
 import org.junit.Test;
@@ -34,7 +35,7 @@ import static org.junit.Assert.assertNotNull;
 import static org.junit.Assert.assertTrue;
 
 @Category({ParquetTest.class, UnlikelyTest.class})
-public class TestParquetReaderConfig {
+public class TestParquetReaderConfig extends BaseTest {
 
   @Test
   public void testDefaultsDeserialization() throws Exception {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetReaderUtility.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetReaderUtility.java
index f93869c..ee4fec5 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetReaderUtility.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/parquet/TestParquetReaderUtility.java
@@ -19,6 +19,7 @@ package org.apache.drill.exec.store.parquet;
 
 import org.apache.drill.categories.ParquetTest;
 import org.apache.drill.categories.UnlikelyTest;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 
@@ -38,7 +39,7 @@ import java.io.IOException;
 import java.util.Map;
 
 @Category({ParquetTest.class, UnlikelyTest.class})
-public class TestParquetReaderUtility {
+public class TestParquetReaderUtility extends BaseTest {
 
   private static final String path = "src/test/resources/store/parquet/complex/complex.parquet";
   private static ParquetMetadata footer;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/store/store/TestAssignment.java b/exec/java-exec/src/test/java/org/apache/drill/exec/store/store/TestAssignment.java
index 1297967..853f2ce 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/store/store/TestAssignment.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/store/store/TestAssignment.java
@@ -26,6 +26,7 @@ import org.apache.drill.exec.store.schedule.AssignmentCreator;
 import org.apache.drill.exec.store.schedule.CompleteFileWork;
 import org.apache.drill.exec.store.schedule.EndpointByteMap;
 import org.apache.drill.exec.store.schedule.EndpointByteMapImpl;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.fs.Path;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -36,7 +37,7 @@ import java.util.List;
 import java.util.Set;
 import java.util.concurrent.ThreadLocalRandom;
 
-public class TestAssignment {
+public class TestAssignment extends BaseTest {
 
   private static final long FILE_SIZE = 1000;
   private static List<DrillbitEndpoint> endpoints;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/test/Drill2130JavaExecHamcrestConfigurationTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/test/Drill2130JavaExecHamcrestConfigurationTest.java
index fa599bd..e3eba89 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/test/Drill2130JavaExecHamcrestConfigurationTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/test/Drill2130JavaExecHamcrestConfigurationTest.java
@@ -21,10 +21,11 @@ import static org.hamcrest.CoreMatchers.equalTo;
 import static org.junit.Assert.assertThat;
 import static org.junit.Assert.fail;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 
-public class Drill2130JavaExecHamcrestConfigurationTest {
+public class Drill2130JavaExecHamcrestConfigurationTest extends BaseTest {
 
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory
       .getLogger(Drill2130JavaExecHamcrestConfigurationTest.class);
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/util/DrillExceptionUtilTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/util/DrillExceptionUtilTest.java
index a2cb29a..70679a1 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/util/DrillExceptionUtilTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/util/DrillExceptionUtilTest.java
@@ -22,11 +22,12 @@ import org.apache.drill.common.exceptions.DrillRuntimeException;
 import org.apache.drill.common.exceptions.ErrorHelper;
 import org.apache.drill.common.util.DrillExceptionUtil;
 import org.apache.drill.exec.proto.UserBitShared;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import static org.junit.Assert.assertEquals;
 
-public class DrillExceptionUtilTest {
+public class DrillExceptionUtilTest extends BaseTest {
   private static final String ERROR_MESSAGE = "Exception Test";
   private static final String NESTED_ERROR_MESSAGE = "Nested Exception";
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/util/FileSystemUtilTestBase.java b/exec/java-exec/src/test/java/org/apache/drill/exec/util/FileSystemUtilTestBase.java
index 5081bfd..20dde2a 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/util/FileSystemUtilTestBase.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/util/FileSystemUtilTestBase.java
@@ -21,6 +21,7 @@ import org.apache.drill.shaded.guava.com.google.common.base.Strings;
 import org.apache.drill.shaded.guava.com.google.common.io.Files;
 import org.apache.commons.io.FileUtils;
 import org.apache.drill.exec.ExecTest;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.junit.BeforeClass;
@@ -33,7 +34,7 @@ import java.util.Arrays;
  * Base test class for file system util classes that will during test initialization
  * setup file system connection and create directories and files needed for unit tests.
  */
-public class FileSystemUtilTestBase {
+public class FileSystemUtilTestBase extends BaseTest {
 
   /*
     Directory and file structure created during test initialization:
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestApproximateStringMatcher.java b/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestApproximateStringMatcher.java
index 215dba9..b3b0a91 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestApproximateStringMatcher.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestApproximateStringMatcher.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.exec.util;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import java.util.ArrayList;
@@ -24,7 +25,7 @@ import java.util.List;
 
 import static org.junit.Assert.assertEquals;
 
-public class TestApproximateStringMatcher {
+public class TestApproximateStringMatcher extends BaseTest {
     @Test
     public void testStringMatcher() {
         List<String> names = new ArrayList<>();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestArrayWrappedIntIntMap.java b/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestArrayWrappedIntIntMap.java
index da2af9c..b877705 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestArrayWrappedIntIntMap.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestArrayWrappedIntIntMap.java
@@ -17,11 +17,12 @@
  */
 package org.apache.drill.exec.util;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import static org.junit.Assert.assertEquals;
 
-public class TestArrayWrappedIntIntMap {
+public class TestArrayWrappedIntIntMap extends BaseTest {
   @Test
   public void testSimple() {
     ArrayWrappedIntIntMap map = new ArrayWrappedIntIntMap();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestValueVectorElementFormatter.java b/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestValueVectorElementFormatter.java
index 8e7df54..448c281 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestValueVectorElementFormatter.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/util/TestValueVectorElementFormatter.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.util;
 import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.server.options.OptionManager;
+import org.apache.drill.test.BaseTest;
 import org.junit.Before;
 import org.junit.Test;
 import org.mockito.Mock;
@@ -34,7 +35,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNull;
 import static org.mockito.Mockito.when;
 
-public class TestValueVectorElementFormatter {
+public class TestValueVectorElementFormatter extends BaseTest {
 
   @Mock
   private OptionManager options;
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/TestSplitAndTransfer.java b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/TestSplitAndTransfer.java
index d8a9e03..e1b8dbc 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/TestSplitAndTransfer.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/TestSplitAndTransfer.java
@@ -25,6 +25,7 @@ import org.apache.drill.exec.memory.RootAllocatorFactory;
 import org.apache.drill.exec.record.MaterializedField;
 import org.apache.drill.exec.record.TransferPair;
 import org.apache.drill.exec.vector.NullableVarCharVector.Accessor;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import static org.junit.Assert.assertEquals;
@@ -33,7 +34,7 @@ import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.assertArrayEquals;
 
 
-public class TestSplitAndTransfer {
+public class TestSplitAndTransfer extends BaseTest {
   @Test
   public void test() throws Exception {
     final DrillConfig drillConfig = DrillConfig.create();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/accessor/GenericAccessorTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/accessor/GenericAccessorTest.java
index c70765f..57a0191 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/accessor/GenericAccessorTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/accessor/GenericAccessorTest.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.vector.accessor;
 import org.apache.drill.categories.VectorTest;
 import org.apache.drill.exec.proto.UserBitShared;
 import org.apache.drill.exec.vector.ValueVector;
+import org.apache.drill.test.BaseTest;
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -35,7 +36,7 @@ import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
 @Category(VectorTest.class)
-public class GenericAccessorTest {
+public class GenericAccessorTest extends BaseTest {
 
   public static final Object NON_NULL_VALUE = "Non-null value";
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/accessor/TestTimePrintMillis.java b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/accessor/TestTimePrintMillis.java
index ab6aba7..06dbfff 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/accessor/TestTimePrintMillis.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/accessor/TestTimePrintMillis.java
@@ -18,10 +18,11 @@
 package org.apache.drill.exec.vector.accessor;
 
 import org.apache.drill.exec.vector.accessor.sql.TimePrintMillis;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
-public class TestTimePrintMillis {
+public class TestTimePrintMillis extends BaseTest {
 
   @Test
   public void testPrintingMillis() {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestPromotableWriter.java b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestPromotableWriter.java
index cbe5a58..2500c60 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestPromotableWriter.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestPromotableWriter.java
@@ -25,9 +25,10 @@ import org.apache.drill.exec.util.BatchPrinter;
 import org.apache.drill.exec.vector.complex.impl.VectorContainerWriter;
 import org.apache.drill.exec.vector.complex.writer.BaseWriter.ComplexWriter;
 import org.apache.drill.exec.vector.complex.writer.BaseWriter.MapWriter;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
-public class TestPromotableWriter {
+public class TestPromotableWriter extends BaseTest {
 
   @Test
   public void list() throws Exception {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestRepeated.java b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestRepeated.java
index 6785cf6..5a29f2a 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestRepeated.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/vector/complex/writer/TestRepeated.java
@@ -31,6 +31,7 @@ import org.apache.drill.exec.vector.complex.impl.ComplexWriterImpl;
 import org.apache.drill.exec.vector.complex.reader.FieldReader;
 import org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter;
 import org.apache.drill.exec.vector.complex.writer.BaseWriter.MapWriter;
+import org.apache.drill.test.BaseTest;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -38,7 +39,7 @@ import org.junit.Test;
 import com.fasterxml.jackson.databind.ObjectMapper;
 import com.fasterxml.jackson.databind.ObjectWriter;
 
-public class TestRepeated {
+public class TestRepeated extends BaseTest {
   // private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(TestRepeated.class);
 
   private static final DrillConfig drillConfig = DrillConfig.create();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/work/filter/BloomFilterTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/work/filter/BloomFilterTest.java
index 44ea06a..761a2cc 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/work/filter/BloomFilterTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/work/filter/BloomFilterTest.java
@@ -42,11 +42,12 @@ import org.apache.drill.exec.server.Drillbit;
 import org.apache.drill.exec.server.DrillbitContext;
 import org.apache.drill.exec.server.RemoteServiceSet;
 import org.apache.drill.exec.vector.VarCharVector;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 import java.util.Iterator;
 
-public class BloomFilterTest {
+public class BloomFilterTest extends BaseTest {
   public static DrillConfig c = DrillConfig.create();
 
   class TestRecordBatch implements RecordBatch {
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/work/fragment/FragmentStatusReporterTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/work/fragment/FragmentStatusReporterTest.java
index a119a8c..b4bc9cc 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/work/fragment/FragmentStatusReporterTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/work/fragment/FragmentStatusReporterTest.java
@@ -26,6 +26,7 @@ import org.apache.drill.exec.proto.ExecProtos.FragmentHandle;
 import org.apache.drill.exec.proto.UserBitShared.FragmentState;
 import org.apache.drill.exec.rpc.control.ControlTunnel;
 import org.apache.drill.exec.rpc.control.Controller;
+import org.apache.drill.test.BaseTest;
 import org.junit.Before;
 import org.junit.Test;
 
@@ -40,7 +41,7 @@ import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.verifyZeroInteractions;
 import static org.mockito.Mockito.when;
 
-public class FragmentStatusReporterTest {
+public class FragmentStatusReporterTest extends BaseTest {
 
   private FragmentStatusReporter statusReporter;
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/test/ExampleTest.java b/exec/java-exec/src/test/java/org/apache/drill/test/ExampleTest.java
index 2a24002..40cb1f5 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/test/ExampleTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/test/ExampleTest.java
@@ -66,7 +66,7 @@ import ch.qos.logback.classic.Level;
 // real test.
 
 @Ignore
-public class ExampleTest {
+public class ExampleTest extends BaseTest {
   static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(ExampleTest.class);
 
   /**
diff --git a/exec/java-exec/src/test/java/org/apache/drill/test/rowSet/test/TestRowSetComparison.java b/exec/java-exec/src/test/java/org/apache/drill/test/rowSet/test/TestRowSetComparison.java
index e9f1590..0326612 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/test/rowSet/test/TestRowSetComparison.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/test/rowSet/test/TestRowSetComparison.java
@@ -25,6 +25,7 @@ import org.apache.drill.exec.record.metadata.SchemaBuilder;
 import org.apache.drill.exec.record.metadata.TupleMetadata;
 import org.apache.drill.exec.physical.rowSet.RowSet;
 import org.apache.drill.exec.physical.rowSet.RowSetBuilder;
+import org.apache.drill.test.BaseTest;
 import org.apache.drill.test.rowSet.RowSetComparison;
 import org.junit.After;
 import org.junit.Before;
@@ -32,7 +33,7 @@ import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category(RowSetTests.class)
-public class TestRowSetComparison {
+public class TestRowSetComparison extends BaseTest {
   private BufferAllocator allocator;
 
   @Before
diff --git a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
index c343037..99f399d 100644
--- a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
+++ b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.jdbc;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.ClassRule;
 import org.junit.Test;
 import org.junit.rules.TestWatcher;
@@ -44,7 +45,7 @@ import java.util.stream.Collectors;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
 
-public class ITTestShadedJar {
+public class ITTestShadedJar extends BaseTest {
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(ITTestShadedJar.class);
 
   private static DrillbitClassLoader drillbitLoader;
diff --git a/exec/jdbc/src/test/java/org/apache/drill/jdbc/ConnectionTransactionMethodsTest.java b/exec/jdbc/src/test/java/org/apache/drill/jdbc/ConnectionTransactionMethodsTest.java
index ffc5b68..26eda3a 100644
--- a/exec/jdbc/src/test/java/org/apache/drill/jdbc/ConnectionTransactionMethodsTest.java
+++ b/exec/jdbc/src/test/java/org/apache/drill/jdbc/ConnectionTransactionMethodsTest.java
@@ -26,6 +26,7 @@ import static org.hamcrest.CoreMatchers.containsString;
 import static org.hamcrest.CoreMatchers.equalTo;
 import static org.junit.Assert.assertThat;
 import org.apache.drill.categories.JdbcTest;
+import org.apache.drill.test.BaseTest;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -41,7 +42,7 @@ import java.sql.SQLException;
  * methods.
  */
 @Category(JdbcTest.class)
-public class ConnectionTransactionMethodsTest {
+public class ConnectionTransactionMethodsTest extends BaseTest {
 
   private static Connection connection;
 
diff --git a/exec/jdbc/src/test/java/org/apache/drill/jdbc/DatabaseMetaDataTest.java b/exec/jdbc/src/test/java/org/apache/drill/jdbc/DatabaseMetaDataTest.java
index cd9803b..9ee39b3 100644
--- a/exec/jdbc/src/test/java/org/apache/drill/jdbc/DatabaseMetaDataTest.java
+++ b/exec/jdbc/src/test/java/org/apache/drill/jdbc/DatabaseMetaDataTest.java
@@ -34,6 +34,7 @@ import java.sql.DatabaseMetaData;
 import java.sql.SQLException;
 
 import org.apache.drill.categories.JdbcTest;
+import org.apache.drill.test.BaseTest;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -45,7 +46,7 @@ import org.junit.experimental.categories.Category;
  * {@link DatabaseMetaDataGetColumnsTest})).
  */
 @Category(JdbcTest.class)
-public class DatabaseMetaDataTest {
+public class DatabaseMetaDataTest extends BaseTest {
 
   protected static Connection connection;
   protected static DatabaseMetaData dbmd;
diff --git a/exec/jdbc/src/test/java/org/apache/drill/jdbc/DrillColumnMetaDataListTest.java b/exec/jdbc/src/test/java/org/apache/drill/jdbc/DrillColumnMetaDataListTest.java
index f4e0363..a67d067 100644
--- a/exec/jdbc/src/test/java/org/apache/drill/jdbc/DrillColumnMetaDataListTest.java
+++ b/exec/jdbc/src/test/java/org/apache/drill/jdbc/DrillColumnMetaDataListTest.java
@@ -37,6 +37,7 @@ import org.apache.drill.exec.record.BatchSchema;
 import org.apache.drill.exec.record.MaterializedField;
 import org.apache.drill.jdbc.impl.DrillColumnMetaDataList;
 import org.apache.drill.categories.JdbcTest;
+import org.apache.drill.test.BaseTest;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
@@ -46,7 +47,7 @@ import org.mockito.invocation.InvocationOnMock;
 import org.mockito.stubbing.Answer;
 
 @Category(JdbcTest.class)
-public class DrillColumnMetaDataListTest {
+public class DrillColumnMetaDataListTest extends BaseTest {
 
   private DrillColumnMetaDataList emptyList;
 
diff --git a/exec/jdbc/src/test/java/org/apache/drill/jdbc/impl/TypeConvertingSqlAccessorTest.java b/exec/jdbc/src/test/java/org/apache/drill/jdbc/impl/TypeConvertingSqlAccessorTest.java
index 1d81267..d10e28f 100644
--- a/exec/jdbc/src/test/java/org/apache/drill/jdbc/impl/TypeConvertingSqlAccessorTest.java
+++ b/exec/jdbc/src/test/java/org/apache/drill/jdbc/impl/TypeConvertingSqlAccessorTest.java
@@ -25,6 +25,7 @@ import org.apache.drill.exec.vector.accessor.InvalidAccessException;
 import org.apache.drill.exec.vector.accessor.SqlAccessor;
 import org.apache.drill.jdbc.SQLConversionOverflowException;
 import org.apache.drill.categories.JdbcTest;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -39,7 +40,7 @@ import static org.junit.Assert.assertThat;
  * (Also see {@link org.apache.drill.jdbc.ResultSetGetMethodConversionsTest}.
  */
 @Category(JdbcTest.class)
-public class TypeConvertingSqlAccessorTest {
+public class TypeConvertingSqlAccessorTest extends BaseTest {
 
   /**
    * Base test stub(?) for accessors underlying TypeConvertingSqlAccessor.
diff --git a/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2130JavaJdbcHamcrestConfigurationTest.java b/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2130JavaJdbcHamcrestConfigurationTest.java
index 3adfbf6..b1c32c6 100644
--- a/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2130JavaJdbcHamcrestConfigurationTest.java
+++ b/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2130JavaJdbcHamcrestConfigurationTest.java
@@ -18,6 +18,7 @@
 package org.apache.drill.jdbc.test;
 
 import org.apache.drill.categories.JdbcTest;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -26,7 +27,7 @@ import static org.junit.Assert.fail;
 import static org.hamcrest.CoreMatchers.equalTo;
 
 @Category(JdbcTest.class)
-public class Drill2130JavaJdbcHamcrestConfigurationTest {
+public class Drill2130JavaJdbcHamcrestConfigurationTest extends BaseTest {
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(Drill2130JavaJdbcHamcrestConfigurationTest.class);
 
   @SuppressWarnings("unused")
diff --git a/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2288GetColumnsMetadataWhenNoRowsTest.java b/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2288GetColumnsMetadataWhenNoRowsTest.java
index c98059e..94ef471 100644
--- a/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2288GetColumnsMetadataWhenNoRowsTest.java
+++ b/exec/jdbc/src/test/java/org/apache/drill/jdbc/test/Drill2288GetColumnsMetadataWhenNoRowsTest.java
@@ -31,6 +31,7 @@ import java.sql.Statement;
 import org.apache.drill.jdbc.Driver;
 import org.apache.drill.categories.JdbcTest;
 import org.apache.drill.jdbc.JdbcTestBase;
+import org.apache.drill.test.BaseTest;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -41,7 +42,7 @@ import org.junit.experimental.categories.Category;
  * scan yielded an empty (zero-row) result set.
  */
 @Category(JdbcTest.class)
-public class Drill2288GetColumnsMetadataWhenNoRowsTest {
+public class Drill2288GetColumnsMetadataWhenNoRowsTest extends BaseTest {
   private static Connection connection;
 
   @BeforeClass
diff --git a/exec/memory/base/src/test/java/org/apache/drill/exec/memory/BoundsCheckingTest.java b/exec/memory/base/src/test/java/org/apache/drill/exec/memory/BoundsCheckingTest.java
index 8021228..3b5c7ee 100644
--- a/exec/memory/base/src/test/java/org/apache/drill/exec/memory/BoundsCheckingTest.java
+++ b/exec/memory/base/src/test/java/org/apache/drill/exec/memory/BoundsCheckingTest.java
@@ -20,6 +20,7 @@ package org.apache.drill.exec.memory;
 import java.lang.reflect.Field;
 import java.lang.reflect.Modifier;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.After;
 import org.junit.AfterClass;
 import org.junit.Before;
@@ -31,8 +32,7 @@ import io.netty.util.IllegalReferenceCountException;
 
 import static org.junit.Assert.fail;
 
-public class BoundsCheckingTest
-{
+public class BoundsCheckingTest extends BaseTest {
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(BoundsCheckingTest.class);
 
   private static boolean old;
diff --git a/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestAccountant.java b/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestAccountant.java
index d977f96..c359404 100644
--- a/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestAccountant.java
+++ b/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestAccountant.java
@@ -21,12 +21,13 @@ import static org.junit.Assert.assertEquals;
 
 import org.apache.drill.categories.MemoryTest;
 import org.apache.drill.exec.memory.Accountant.AllocationOutcome;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category(MemoryTest.class)
-public class TestAccountant {
+public class TestAccountant extends BaseTest {
 
   @Test
   public void basic() {
diff --git a/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestBaseAllocator.java b/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestBaseAllocator.java
index 3faa7ea..827eb9c 100644
--- a/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestBaseAllocator.java
+++ b/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestBaseAllocator.java
@@ -27,12 +27,13 @@ import io.netty.buffer.DrillBuf.TransferResult;
 
 import org.apache.drill.categories.MemoryTest;
 import org.apache.drill.exec.exception.OutOfMemoryException;
+import org.apache.drill.test.BaseTest;
 import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category(MemoryTest.class)
-public class TestBaseAllocator {
+public class TestBaseAllocator extends BaseTest {
   // private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(TestBaseAllocator.class);
 
   private final static int MAX_ALLOCATION = 8 * 1024;
diff --git a/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestEndianess.java b/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestEndianess.java
index 82a91a7..9d2fd76 100644
--- a/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestEndianess.java
+++ b/exec/memory/base/src/test/java/org/apache/drill/exec/memory/TestEndianess.java
@@ -23,11 +23,12 @@ import io.netty.buffer.ByteBuf;
 import org.apache.drill.categories.MemoryTest;
 import org.apache.drill.common.DrillAutoCloseables;
 import org.apache.drill.common.config.DrillConfig;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category(MemoryTest.class)
-public class TestEndianess {
+public class TestEndianess extends BaseTest {
 
   @Test
   public void testLittleEndian() {
diff --git a/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/TestMetadataProperties.java b/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/TestMetadataProperties.java
index 4834ae7..6ffd170 100644
--- a/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/TestMetadataProperties.java
+++ b/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/TestMetadataProperties.java
@@ -30,11 +30,12 @@ import org.apache.drill.categories.RowSetTests;
 import org.apache.drill.common.types.TypeProtos.DataMode;
 import org.apache.drill.common.types.TypeProtos.MinorType;
 import org.apache.drill.exec.expr.BasicTypeHelper;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 @Category(RowSetTests.class)
-public class TestMetadataProperties {
+public class TestMetadataProperties extends BaseTest {
 
   @Test
   public void testBasics() {
diff --git a/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/schema/parser/TestParserErrorHandling.java b/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/schema/parser/TestParserErrorHandling.java
index 6bd730b..1d5a84b 100644
--- a/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/schema/parser/TestParserErrorHandling.java
+++ b/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/schema/parser/TestParserErrorHandling.java
@@ -17,13 +17,14 @@
  */
 package org.apache.drill.exec.record.metadata.schema.parser;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Rule;
 import org.junit.Test;
 import org.junit.rules.ExpectedException;
 
 import java.io.IOException;
 
-public class TestParserErrorHandling {
+public class TestParserErrorHandling extends BaseTest {
 
   @Rule
   public ExpectedException thrown = ExpectedException.none();
diff --git a/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/schema/parser/TestSchemaParser.java b/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/schema/parser/TestSchemaParser.java
index 5bb6cfb..0a6bb78 100644
--- a/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/schema/parser/TestSchemaParser.java
+++ b/exec/vector/src/test/java/org/apache/drill/exec/record/metadata/schema/parser/TestSchemaParser.java
@@ -21,6 +21,7 @@ import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.exec.record.metadata.ColumnMetadata;
 import org.apache.drill.exec.record.metadata.SchemaBuilder;
 import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.test.BaseTest;
 import org.joda.time.LocalDate;
 import org.junit.Test;
 
@@ -35,7 +36,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
-public class TestSchemaParser {
+public class TestSchemaParser extends BaseTest {
 
   @Test
   public void checkQuotedIdWithEscapes() throws Exception {
diff --git a/exec/vector/src/test/java/org/apache/drill/exec/vector/VariableLengthVectorTest.java b/exec/vector/src/test/java/org/apache/drill/exec/vector/VariableLengthVectorTest.java
index 05fb66a..96cf102 100644
--- a/exec/vector/src/test/java/org/apache/drill/exec/vector/VariableLengthVectorTest.java
+++ b/exec/vector/src/test/java/org/apache/drill/exec/vector/VariableLengthVectorTest.java
@@ -21,14 +21,14 @@ import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.common.types.Types;
 import org.apache.drill.exec.memory.RootAllocator;
 import org.apache.drill.exec.record.MaterializedField;
+import org.apache.drill.test.BaseTest;
 import org.junit.Assert;
 import org.junit.Test;
 
 /**
  * This test uses {@link VarCharVector} to test the template code in VariableLengthVector.
  */
-public class VariableLengthVectorTest
-{
+public class VariableLengthVectorTest extends BaseTest {
   /**
    * If the vector contains 1000 records, setting a value count of 1000 should work.
    */
diff --git a/exec/vector/src/test/java/org/apache/drill/exec/vector/VectorTest.java b/exec/vector/src/test/java/org/apache/drill/exec/vector/VectorTest.java
index d633a79..a3993c9 100644
--- a/exec/vector/src/test/java/org/apache/drill/exec/vector/VectorTest.java
+++ b/exec/vector/src/test/java/org/apache/drill/exec/vector/VectorTest.java
@@ -34,13 +34,14 @@ import org.apache.drill.exec.vector.complex.writer.BaseWriter.ListWriter;
 import org.apache.drill.exec.vector.complex.writer.BaseWriter.MapWriter;
 import org.apache.drill.exec.vector.complex.writer.FieldWriter;
 import org.apache.drill.exec.vector.complex.writer.IntWriter;
+import org.apache.drill.test.BaseTest;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import io.netty.buffer.DrillBuf;
 
-public class VectorTest {
+public class VectorTest extends BaseTest {
 
   private static RootAllocator allocator;
 
diff --git a/logical/src/test/java/org/apache/drill/common/expression/SchemaPathTest.java b/logical/src/test/java/org/apache/drill/common/expression/SchemaPathTest.java
index 6a820ef..3bc5ec5 100644
--- a/logical/src/test/java/org/apache/drill/common/expression/SchemaPathTest.java
+++ b/logical/src/test/java/org/apache/drill/common/expression/SchemaPathTest.java
@@ -17,11 +17,12 @@
  */
 package org.apache.drill.common.expression;
 
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import static org.junit.Assert.assertEquals;
 
-public class SchemaPathTest {
+public class SchemaPathTest extends BaseTest {
 
   @Test
   public void testUnIndexedWithOutArray() {
diff --git a/logical/src/test/java/org/apache/drill/common/expression/fn/JodaDateValidatorTest.java b/logical/src/test/java/org/apache/drill/common/expression/fn/JodaDateValidatorTest.java
index cb23da2..16df588 100644
--- a/logical/src/test/java/org/apache/drill/common/expression/fn/JodaDateValidatorTest.java
+++ b/logical/src/test/java/org/apache/drill/common/expression/fn/JodaDateValidatorTest.java
@@ -18,6 +18,7 @@
 package org.apache.drill.common.expression.fn;
 
 import org.apache.drill.shaded.guava.com.google.common.collect.Maps;
+import org.apache.drill.test.BaseTest;
 import org.joda.time.DateTime;
 import org.joda.time.DateTimeZone;
 import org.joda.time.format.DateTimeFormatter;
@@ -30,7 +31,7 @@ import static org.apache.drill.common.expression.fn.JodaDateValidator.toJodaForm
 import static org.joda.time.DateTime.parse;
 import static org.joda.time.format.DateTimeFormat.forPattern;
 
-public class JodaDateValidatorTest {
+public class JodaDateValidatorTest extends BaseTest {
 
   private static final Map<String, String> TEST_CASES = Maps.newHashMap();
 
diff --git a/logical/src/test/java/org/apache/drill/common/logical/data/OrderTest.java b/logical/src/test/java/org/apache/drill/common/logical/data/OrderTest.java
index 8242076..9bfda8b 100644
--- a/logical/src/test/java/org/apache/drill/common/logical/data/OrderTest.java
+++ b/logical/src/test/java/org/apache/drill/common/logical/data/OrderTest.java
@@ -25,11 +25,12 @@ import org.apache.drill.common.logical.data.Order.Ordering;
 import org.apache.calcite.rel.RelFieldCollation;
 import org.apache.calcite.rel.RelFieldCollation.Direction;
 import org.apache.calcite.rel.RelFieldCollation.NullDirection;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 
 import static org.hamcrest.CoreMatchers.equalTo;
 
-public class OrderTest {
+public class OrderTest extends BaseTest {
 
   //////////
   // Order.Ordering tests:
diff --git a/metastore/iceberg-metastore/src/test/java/org/apache/drill/metastore/iceberg/IcebergBaseTest.java b/metastore/iceberg-metastore/src/test/java/org/apache/drill/metastore/iceberg/IcebergBaseTest.java
index 5ef7b80..51db136 100644
--- a/metastore/iceberg-metastore/src/test/java/org/apache/drill/metastore/iceberg/IcebergBaseTest.java
+++ b/metastore/iceberg-metastore/src/test/java/org/apache/drill/metastore/iceberg/IcebergBaseTest.java
@@ -21,12 +21,11 @@ import com.typesafe.config.Config;
 import com.typesafe.config.ConfigValueFactory;
 import org.apache.drill.categories.MetastoreTest;
 import org.apache.drill.common.config.DrillConfig;
-import org.apache.drill.common.util.GuavaPatcher;
 import org.apache.drill.metastore.iceberg.config.IcebergConfigConstants;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
-import org.junit.BeforeClass;
 import org.junit.ClassRule;
 import org.junit.Rule;
 import org.junit.experimental.categories.Category;
@@ -36,7 +35,7 @@ import org.junit.rules.TemporaryFolder;
 import java.io.File;
 
 @Category(MetastoreTest.class)
-public abstract class IcebergBaseTest {
+public abstract class IcebergBaseTest extends BaseTest {
 
   @ClassRule
   public static TemporaryFolder defaultFolder = new TemporaryFolder();
@@ -44,12 +43,6 @@ public abstract class IcebergBaseTest {
   @Rule
   public ExpectedException thrown = ExpectedException.none();
 
-  @BeforeClass
-  public static void setup() {
-    // patches Guava Preconditions class with missing methods
-    GuavaPatcher.patch();
-  }
-
   /**
    * Creates Hadoop configuration and sets local file system as default.
    *
diff --git a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestBasicTablesRequests.java b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestBasicTablesRequests.java
index 2d95255..8278816 100644
--- a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestBasicTablesRequests.java
+++ b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestBasicTablesRequests.java
@@ -20,6 +20,7 @@ package org.apache.drill.metastore.components.tables;
 import org.apache.drill.categories.MetastoreTest;
 import org.apache.drill.metastore.expressions.FilterExpression;
 import org.apache.drill.metastore.metadata.MetadataInfo;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -32,7 +33,7 @@ import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 
 @Category(MetastoreTest.class)
-public class TestBasicTablesRequests {
+public class TestBasicTablesRequests extends BaseTest {
 
   @Test
   public void testRequestMetadataWithoutRequestColumns() {
diff --git a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestBasicTablesTransformer.java b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestBasicTablesTransformer.java
index 1bd9f6a..7c13e18 100644
--- a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestBasicTablesTransformer.java
+++ b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestBasicTablesTransformer.java
@@ -25,6 +25,7 @@ import org.apache.drill.metastore.metadata.MetadataType;
 import org.apache.drill.metastore.metadata.PartitionMetadata;
 import org.apache.drill.metastore.metadata.RowGroupMetadata;
 import org.apache.drill.metastore.metadata.SegmentMetadata;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -36,7 +37,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 @Category(MetastoreTest.class)
-public class TestBasicTablesTransformer {
+public class TestBasicTablesTransformer extends BaseTest {
 
   @Test
   public void testTables() {
diff --git a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestMetastoreTableInfo.java b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestMetastoreTableInfo.java
index 90ce610..699a18c 100644
--- a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestMetastoreTableInfo.java
+++ b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestMetastoreTableInfo.java
@@ -19,6 +19,7 @@ package org.apache.drill.metastore.components.tables;
 
 import org.apache.drill.categories.MetastoreTest;
 import org.apache.drill.metastore.metadata.TableInfo;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -26,7 +27,7 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
 @Category(MetastoreTest.class)
-public class TestMetastoreTableInfo {
+public class TestMetastoreTableInfo extends BaseTest {
 
   @Test
   public void testAbsentTable() {
diff --git a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestTableMetadataUnitConversion.java b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestTableMetadataUnitConversion.java
index 7f49947..c31c254 100644
--- a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestTableMetadataUnitConversion.java
+++ b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/components/tables/TestTableMetadataUnitConversion.java
@@ -34,6 +34,7 @@ import org.apache.drill.metastore.metadata.TableInfo;
 import org.apache.drill.metastore.statistics.ColumnStatistics;
 import org.apache.drill.metastore.statistics.ColumnStatisticsKind;
 import org.apache.drill.metastore.statistics.StatisticsHolder;
+import org.apache.drill.test.BaseTest;
 import org.apache.hadoop.fs.Path;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -53,7 +54,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
 
 @Category(MetastoreTest.class)
-public class TestTableMetadataUnitConversion {
+public class TestTableMetadataUnitConversion extends BaseTest {
 
   private static Data data;
 
diff --git a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/metadata/MetadataSerDeTest.java b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/metadata/MetadataSerDeTest.java
index 6a2d36d..520e59e 100644
--- a/metastore/metastore-api/src/test/java/org/apache/drill/metastore/metadata/MetadataSerDeTest.java
+++ b/metastore/metastore-api/src/test/java/org/apache/drill/metastore/metadata/MetadataSerDeTest.java
@@ -24,6 +24,7 @@ import org.apache.drill.metastore.statistics.ColumnStatistics;
 import org.apache.drill.metastore.statistics.ColumnStatisticsKind;
 import org.apache.drill.metastore.statistics.StatisticsHolder;
 import org.apache.drill.metastore.statistics.TableStatisticsKind;
+import org.apache.drill.test.BaseTest;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -36,7 +37,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
 
 @Category(MetastoreTest.class)
-public class MetadataSerDeTest {
+public class MetadataSerDeTest extends BaseTest {
 
   @Test
   public void testStatisticsHolderSerialization() {
diff --git a/pom.xml b/pom.xml
index 7f04737..c551747 100644
--- a/pom.xml
+++ b/pom.xml
@@ -753,26 +753,7 @@
               <goals>
                 <goal>test</goal>
               </goals>
-            <!-- TODO: Remove excludedGroups after DRILL-7393 is fixed. -->
-            <configuration>
-              <excludedGroups>org.apache.drill.categories.MetastoreTest,${excludedGroups}</excludedGroups>
-            </configuration>
           </execution>
-          <!--
-              All Metastore tests must run in separate a JVM to ensure
-              that Guava Preconditions class is patched before execution.
-              TODO: Remove execution block for metastore-test after DRILL-7393 is fixed.
-          -->
-            <execution>
-              <id>metastore-test</id>
-              <phase>test</phase>
-              <goals>
-                <goal>test</goal>
-              </goals>
-              <configuration>
-                <groups>org.apache.drill.categories.MetastoreTest</groups>
-              </configuration>
-            </execution>
           </executions>
           <dependencies>
             <dependency>

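The pom.xml hunk above removes the separate surefire execution that ran Metastore tests in their own JVM, and the IcebergBaseTest hunk removes its per-class @BeforeClass patching hook; both become redundant once every test class extends the shared org.apache.drill.test.BaseTest, whose class initializer applies the patching before any test method in a subclass runs. The sketch below shows that pattern only in outline: the actual contents of BaseTest are not part of this excerpt, so the static block (reusing the zero-argument GuavaPatcher.patch() call that the IcebergBaseTest hunk deletes) is an illustrative assumption, not the committed implementation.

package org.apache.drill.test;

import org.apache.drill.common.util.GuavaPatcher;

// Minimal sketch: a common test superclass whose static initializer runs the
// Guava patcher once, before JUnit instantiates any subclass, so no per-class
// @BeforeClass hook or separate test JVM is needed.
public class BaseTest {
  static {
    GuavaPatcher.patch();
  }
}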

[drill] 11/11: Add Volodymyr's PGP key

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit 51df52d223932e48284b000967c506cab7fa3524
Author: Volodymyr Vysotskyi <vv...@gmail.com>
AuthorDate: Wed Dec 4 13:51:49 2019 +0200

    Add Volodymyr's PGP key
---
 KEYS | 297 ++++++++++++++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 194 insertions(+), 103 deletions(-)

diff --git a/KEYS b/KEYS
index 611f7f9..24724b3 100644
--- a/KEYS
+++ b/KEYS
@@ -84,109 +84,109 @@ P5iIjn4zOZYYd6Fobkpw3vi03ifusAeOzwpCmw==
 =6NWa
 -----END PGP PUBLIC KEY BLOCK-----
 
-pub   2048R/9599549D 2014-07-30 [expires: 2018-08-24]
-uid                  Aditya Kishore <ad...@gmail.com>
-sig 3        9599549D 2014-09-27  Aditya Kishore <ad...@gmail.com>
-sig 3        9599549D 2014-07-30  Aditya Kishore <ad...@gmail.com>
-sig 3        0410DA0C 2014-09-27  Ted Dunning (for signing Apache releases) <te...@gmail.com>
-sig 3        214EB15B 2014-09-27  Ted Dunning (personal) <te...@gmail.com>
-uid                  Aditya Kishore <ad...@apache.org>
-sig 3        9599549D 2014-09-27  Aditya Kishore <ad...@gmail.com>
-sig 3        0410DA0C 2014-09-27  Ted Dunning (for signing Apache releases) <te...@gmail.com>
-sig 3        214EB15B 2014-09-27  Ted Dunning (personal) <te...@gmail.com>
-sub   2048R/CDF0E9C1 2014-07-30
-sig          9599549D 2014-07-30  Aditya Kishore <ad...@gmail.com>
-
------BEGIN PGP PUBLIC KEY BLOCK-----
-Version: GnuPG v2.0.22 (MingW32)
-
-mQENBFPYQfgBCAChFbL+Prxubfz/XEpoxwV3GjDQpQ4mHj1hMwqARYa3FT4AtA6c
-AtHx++XDzefvAJ+Vihvk6278iAmFVw04Vlp9YRgZSl2opwnz0akG48cA9JY42+tM
-MXgdZHgK+omzgpq2xtwAho6p5Q96OUm4A91M+MouWwJRz0XN5T1glZlB1dBBAjSL
-3WbO2TH4jKBwDa5Z6sSzpWeQq3FtlwvQ/fK0sdq1NffltZ8EL8Iy7TyXfKLDArZE
-tVs7KoAQOBrALFfMo0xw2fX/fa3EYcuMNQoRPkYcAUfoafynzq8ZNUt12FMDj1N/
-gkZijNiu8ssV2IgxTMYf+OcbpxwJHZjD372dABEBAAG0H0FkaXR5YSBLaXNob3Jl
-IDxhZGlAYXBhY2hlLm9yZz6JAT8EEwECACkCGwMHCwkIBwMCAQYVCAIJCgsEFgID
-AQIeAQIXgAUCVCb8xAUJB6gTuAAKCRDjteTjlZlUnZcAB/wOPESF/K4KcI/yPWSc
-XlcAQL6TxdIQCgM1nNkt/iNNm9q1nTkPHCE6cZ2ROa+yvqP0oLN1BtiHgqYdDi09
-SB9DBVq7h1kLYz9DrYZZ+DifS8yWNW8jPxYkllU3fhy9KzTUoItHed7rg1anv20O
-ROuG43SUIEIHjvLr47BEV3LrkvAijIcOa1x0aFACDn9S2UleaNfIhj1BpZqy9Y9m
-tJGpvI/8J2JlKX5KNJl8nEXlPWa13i+ef8jxJmtVxUVKyaNYzqrJ86ttDmzCEs0k
-z/Kc4N29AbD4H7FIij48epboafnQxOaUfOsz3pt/b9zl/d2u/FCII1gp8V2s0Yc6
-a6E5iQIcBBMBCAAGBQJUJyzSAAoJENBWhWoEENoMDgsP/183CApZfBIO5ooYqmNa
-I0xzH3chCF138LQG5EsZ2yxt8kpCRem8SoL+BYpes53xL8HZRjumeVxd4Q+5PIfH
-lSgE6eAljB7zWfPRf6QyJihp0mwa6tjrDRk8vnLZj6b3IpUM+XdbNCO3NfxCm+DJ
-5FZDm+rrFTZcoKJhYwMhKOO52l22PBBrO52p+LW+b4E1FOf7G7GLYt0lZjNc/5Fr
-w38wa8x5RjQtfgqwlbkUtAIMKi6k9ScTM878AQsmdEWF3qTzUZEkAaqPU49jbpTi
-87rhM1It8u0D9eNhdziGKLHRaJe3GzKfYUa/zw/3Q5Eywh/AONiOtq3uZFieSBj8
-Xb9yMMp919uG2BI+8jKJN8JTSgeklekawuUryS8nUE+TWjnqGE/dW8p1dwXWSGDN
-IG132cHI2JlTAeH6T00SX0zYIw45/2FyJp7rIGdW3BbSc6SI7SqPjJU9SxAwTSZQ
-N+bFTbMtvsmTvrcRm6uxMZNjt4Lx7MEbYSC9/sAnEznNx/TucsNEkYSna+vStzaP
-mrUQgIM9DpFodPB8Gs0tvIv/PHCiQSCzxV7kw7jXqKHoqPsPOwNCDc10K4YVBmJU
-SYVn4cB/FvW6YmPblb1vZ0wg8ndYN3fn/280lpN/4WSXmhvhpQAt65dVKYbMMIPn
-YbPDMAkAUVrbeQAzKKPqETytiQIcBBMBCAAGBQJUJy0RAAoJEP5x0hMhTrFbNf4P
-/1aJbsvnlh83BOiYIKnSGpExyo59XdMGYkgPC40PjKG8psQ79ksnuMabpBR/rmdq
-rvG1AX3AsFMJKj3simouKvQ92SN4qLCRZ2Rvr52NAKoOWzE+ExG1yzgZwDeALbmv
-L9Ar6AD9iBI1HeMp9yh2Ml8hyjVmTg/5aVM+mMIe996yKLhKixk75PZtGrsgQwaN
-FgmjN4olAygJ6u/7RiWD+7c6fG9Kz5T8BpnSBPga/533QquZ6sxrjV1rkccua7Tx
-P8sqcedGHzpthPyPoPRVVbsABPxnTAKONPtq7Oc5/pMbmKFXsL8WcQSop3STBTgN
-7XjVhGb5iBpTsnQ0JtldY6+gRzjRDRje+QXa9LekAwOQkgxbWxHZNIblFnIYjdIb
-8qJ5ilsBv3/AEObbDAIZCHHAICi2yIdjoMZNO/gCpr8xbym2XTMkv+eTUMuwsNBs
-c3c9x6CfWCrslojGmOsBZ8OlrCYPsyhdjONicokKBMpWzH/a6ICmf72smWOs8pOw
-YrYe9ijNXoGOuLHH/cRUaCHdXSbxztKumFD0Lsvr7Qq50Oa6Vrof0k8OtLeMMYr7
-v9DJqoPUf0dlrRFyKd90kbDbrNgvyhV9UEteQ1H9AGQCb5xoU0L52MCP2YdPapRr
-5yCyc+M1myqpDFi1uv23wu6a5mspcGeBUmzl963n16b2tChBZGl0eWEgS2lzaG9y
-ZSA8YWRpdHlha2lzaG9yZUBnbWFpbC5jb20+iQE+BBMBAgAoAhsDBgsJCAcDAgYV
-CAIJCgsEFgIDAQIeAQIXgAUCVCb8xAUJB6gTuAAKCRDjteTjlZlUnWPGB/wJ8ly4
-9Yh8xhjJc+A8hJ9MQ6s8M3DdWIs/+QBfSxmqbl+ngreWfHuuF5ifGiZsvMP93w9x
-GZApuBfZ5tUBuov/IQRgCAjjN7dBdBDWrw5PoO0iiB4TSh/f+6RX0KDpDq3kglTL
-5MVY/DfFxlGhazNyv4dRc++R3RHyFqe6dT6Miwyn0XFPsa6DNSHg07N3xoFqIR/f
-epYWyqfJqtVQfeLtsxleXdyz1IFkg/vZEnDxdfS1dDJeakqISQiL8OfKKyHlZZ3M
-lsCQbVZlS6ReWD2Ws6XpLAlEog0v7DEw9PoSvImyLk16qnywrW4LvM7E3TthzWJF
-FmvefIxECATQnhjZiQE4BBMBAgAiBQJT2EH4AhsDBgsJCAcDAgYVCAIJCgsEFgID
-AQIeAQIXgAAKCRDjteTjlZlUnVesCACLO7cT1pY74di8Tl4K/hDWliUDhsfg0Qf2
-A7aKPRGC3kmk2NOEpbqIyfyomrlnWhJoAn8txmrBvlTwsIqAplZQez9owJ5ulNiq
-jOAzQm5ZKRLw+kBKA+TjQDWfWFn5IEcsL7TemXIBIfPqbnPIkylPGe1XIcE9Aidk
-3Bpn9gSqx/3OXDNN5UVI3i4BnF7mmx+DIqp3kj67LJdHz9C7SMMStKcz66d/H+6n
-bMmmv1R8jjnLSW1muttIWgfg3Eh2ys+Wn1Pasl/dRY5zsN4HcGkozWhmLu9KuAgy
-zABwhU6if9xLAZoMACPCP0f5P0+rKiOK7ImVo6YvLgH+E5xIkWbniQIcBBMBCAAG
-BQJUJyzSAAoJENBWhWoEENoMw8YP/i1cAyp8g2FLpBThn46sS4h23szRJXb/48S8
-UJHT8JsQ+s47Ut3pXPe+rpF5tMx8dNeAmQtT1KJXzuj8K9J5Vks5VZpxhTwiQQOe
-aC/fZFUTnTrOvbgT1MIUrIOEmct79TGnD0y4V2rnxM6236K/JmNLOBZqUBr/i2eM
-IEacJLIm63+BZ1CCt9Vimgghxms1dymePYHBVb05apHdYCrXOcFIBY7U7jZgUERd
-XkVwEfBvbSIV0pDZskVfnTMRObQmBTui3paYshb4uHXGkgEWAs3k4nPRN26GcYM0
-28SHzC3CUMz3ZHFd3+ZpHmdy+DrTpDb6t6XQq8ah+Ai6lWutBt1SE2M1YHsHgGEg
-jjcDWS8cF7tjFaqpL7Z7fKXoVMxt9mRN3raROFGtyiuvFW5Is482xoW3QGpWV4RR
-0iimfjwKdB/FN+vO+CCEApHCg9k5zqFZV1ZQk+vOwoDa9bxivoHtM3FtC9sZoJGp
-U7MEGjm11mJGdlp1V45ph6mNggXQD+Db9TGqr7oZHrH1K4nU0XNrudHW9Omu9son
-Ycojpxj1VHg8RdH7nit9qAHT/fSZWhsqqK+/9aVWSQj5qb5fzoPD9D/N3DdhCbNF
-5RCJKuO3/lmXPHvRup9N1gV1HC3rcRGCFL3wLmusK+bDr8epelNEmQPkKieChi9A
-6Kxsy48diQIcBBMBCAAGBQJUJy0RAAoJEP5x0hMhTrFb0m4P/22RF89IuXcCgUad
-JmMgFax7ncZGEgdvOIA3yza5RbaaDsONwXQv6nHWJMwGkk/ywSZdzwUO0pcknlvq
-DzUkF9P3HUGzT/qUkHtwciZUWZVURt0oKtozO44gDwTJmkZaXV0Sj0dbAaN6FgTa
-VX/F61dKttgAyIbVkndszLXFX8SzriE86CJVKZQLhsJDQNCOQADO0lqjMXAEUaEo
-g3cLR5c9VgR4ZgvFWnCSqG7wdoiSvm94u2E19Qz7lidxNFwxljc1NMVh0qBp7MIY
-1uh3UB3zI/E1NEkaurPZAxsY/6lqEZf+k8ZJVzKBzBmSIKLdeOdJi+1m/J4jnTy6
-AsSPkQHc+l5A9hBM/5FIYjn0Wibsduml94n/V2JJsn14mNjdBvyN2GcIT9Fke7fb
-Pa10MYWme4mxNbUHl2rMq+3XIN5i0A4eFlHIScVdXlcdJIDs2jsR2Suv/AvyygfK
-M8sDfbwhvZ+aAq6SzSBYwQxACRWUGZ6FhtEE6nptWNv9k8aUVXKiY/4zJ8NMKVGS
-NieEDzDRgkCpCr3qRRgwfWYRl0RZmR+B6Jd+suxgbFSjd+Gs6oUE8hZkPEEIVvAB
-UnWLf60nj5IHd/ejBLURO0vULg7VFaH4dJ0LX3/ge6jHJG1TfNex63FRjsV/EbF4
-vMmV2Ttx5aNGqt45sRFNMxwZZfANuQENBFPYQfgBCACurM8U/S5ODUEKQelZX6rn
-3IOFuUEF63tis5oV3hcqpO899DQqwCsbM+bq2RTOLOGGQi65KpLrTVTtxinLiT9l
-pW3NKsxNKXm+WmNJS6nNWhZY1qvYBDQSVF9DdvojAqRf69ECG18hV+Yo8FD0tdT9
-tHYWLuXyIaRfeflJ/yTcvXvHO/Wpdf/uwS5wABIfHNgtgNIUpcE8Mn2vQXIWZARd
-UWo9IbC6NpjzgnSv/e1KzRkRDKAHkkONjRKCTd8rZLugOBS86JRwTNhZSLq+yAEM
-YV+a5ZKJN129ObVA+MS36SknomwK60E8Az8+ntczZ3CXfap5fs9Gc9y2JuZmkrYv
-ABEBAAGJAR8EGAECAAkFAlPYQfgCGwwACgkQ47Xk45WZVJ2roAf9HLax9I2N7zrj
-CCjOdwtkprKUM5Ns11us0FIkmqMjzDkiHbQVuOZKtMBO4hYgKb81vZR4ZaiX8c/s
-DSO8vXGpThBCOGhHWPltuzrMj22I/jGs3E/bzkAZWDfeIOy0Cr8f+1tqgKWphSVS
-bb74qjLof6Q+aUIAlMxyrZL1SLaEw6S9Wvc2rLgRqgi+8M76sy5MJ9z0UPc6nWYh
-RJL5qBwRYc6yQxjvy6tUGqiByS4Z1dcBfPUp790BNHZg/4Oty2RE7gRcCyHUVY88
-IHTg7fxykf15yoE0/Yt8HZIUi6nseffUvYBoF+NURDG5KWMtsBVgmy6Xkz77iBOu
-ueIMpaeK7w==
-=Kovq
------END PGP PUBLIC KEY BLOCK-----
+pub   2048R/9599549D 2014-07-30 [expires: 2018-08-24]
+uid                  Aditya Kishore <ad...@gmail.com>
+sig 3        9599549D 2014-09-27  Aditya Kishore <ad...@gmail.com>
+sig 3        9599549D 2014-07-30  Aditya Kishore <ad...@gmail.com>
+sig 3        0410DA0C 2014-09-27  Ted Dunning (for signing Apache releases) <te...@gmail.com>
+sig 3        214EB15B 2014-09-27  Ted Dunning (personal) <te...@gmail.com>
+uid                  Aditya Kishore <ad...@apache.org>
+sig 3        9599549D 2014-09-27  Aditya Kishore <ad...@gmail.com>
+sig 3        0410DA0C 2014-09-27  Ted Dunning (for signing Apache releases) <te...@gmail.com>
+sig 3        214EB15B 2014-09-27  Ted Dunning (personal) <te...@gmail.com>
+sub   2048R/CDF0E9C1 2014-07-30
+sig          9599549D 2014-07-30  Aditya Kishore <ad...@gmail.com>
+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+Version: GnuPG v2.0.22 (MingW32)
+
+mQENBFPYQfgBCAChFbL+Prxubfz/XEpoxwV3GjDQpQ4mHj1hMwqARYa3FT4AtA6c
+AtHx++XDzefvAJ+Vihvk6278iAmFVw04Vlp9YRgZSl2opwnz0akG48cA9JY42+tM
+MXgdZHgK+omzgpq2xtwAho6p5Q96OUm4A91M+MouWwJRz0XN5T1glZlB1dBBAjSL
+3WbO2TH4jKBwDa5Z6sSzpWeQq3FtlwvQ/fK0sdq1NffltZ8EL8Iy7TyXfKLDArZE
+tVs7KoAQOBrALFfMo0xw2fX/fa3EYcuMNQoRPkYcAUfoafynzq8ZNUt12FMDj1N/
+gkZijNiu8ssV2IgxTMYf+OcbpxwJHZjD372dABEBAAG0H0FkaXR5YSBLaXNob3Jl
+IDxhZGlAYXBhY2hlLm9yZz6JAT8EEwECACkCGwMHCwkIBwMCAQYVCAIJCgsEFgID
+AQIeAQIXgAUCVCb8xAUJB6gTuAAKCRDjteTjlZlUnZcAB/wOPESF/K4KcI/yPWSc
+XlcAQL6TxdIQCgM1nNkt/iNNm9q1nTkPHCE6cZ2ROa+yvqP0oLN1BtiHgqYdDi09
+SB9DBVq7h1kLYz9DrYZZ+DifS8yWNW8jPxYkllU3fhy9KzTUoItHed7rg1anv20O
+ROuG43SUIEIHjvLr47BEV3LrkvAijIcOa1x0aFACDn9S2UleaNfIhj1BpZqy9Y9m
+tJGpvI/8J2JlKX5KNJl8nEXlPWa13i+ef8jxJmtVxUVKyaNYzqrJ86ttDmzCEs0k
+z/Kc4N29AbD4H7FIij48epboafnQxOaUfOsz3pt/b9zl/d2u/FCII1gp8V2s0Yc6
+a6E5iQIcBBMBCAAGBQJUJyzSAAoJENBWhWoEENoMDgsP/183CApZfBIO5ooYqmNa
+I0xzH3chCF138LQG5EsZ2yxt8kpCRem8SoL+BYpes53xL8HZRjumeVxd4Q+5PIfH
+lSgE6eAljB7zWfPRf6QyJihp0mwa6tjrDRk8vnLZj6b3IpUM+XdbNCO3NfxCm+DJ
+5FZDm+rrFTZcoKJhYwMhKOO52l22PBBrO52p+LW+b4E1FOf7G7GLYt0lZjNc/5Fr
+w38wa8x5RjQtfgqwlbkUtAIMKi6k9ScTM878AQsmdEWF3qTzUZEkAaqPU49jbpTi
+87rhM1It8u0D9eNhdziGKLHRaJe3GzKfYUa/zw/3Q5Eywh/AONiOtq3uZFieSBj8
+Xb9yMMp919uG2BI+8jKJN8JTSgeklekawuUryS8nUE+TWjnqGE/dW8p1dwXWSGDN
+IG132cHI2JlTAeH6T00SX0zYIw45/2FyJp7rIGdW3BbSc6SI7SqPjJU9SxAwTSZQ
+N+bFTbMtvsmTvrcRm6uxMZNjt4Lx7MEbYSC9/sAnEznNx/TucsNEkYSna+vStzaP
+mrUQgIM9DpFodPB8Gs0tvIv/PHCiQSCzxV7kw7jXqKHoqPsPOwNCDc10K4YVBmJU
+SYVn4cB/FvW6YmPblb1vZ0wg8ndYN3fn/280lpN/4WSXmhvhpQAt65dVKYbMMIPn
+YbPDMAkAUVrbeQAzKKPqETytiQIcBBMBCAAGBQJUJy0RAAoJEP5x0hMhTrFbNf4P
+/1aJbsvnlh83BOiYIKnSGpExyo59XdMGYkgPC40PjKG8psQ79ksnuMabpBR/rmdq
+rvG1AX3AsFMJKj3simouKvQ92SN4qLCRZ2Rvr52NAKoOWzE+ExG1yzgZwDeALbmv
+L9Ar6AD9iBI1HeMp9yh2Ml8hyjVmTg/5aVM+mMIe996yKLhKixk75PZtGrsgQwaN
+FgmjN4olAygJ6u/7RiWD+7c6fG9Kz5T8BpnSBPga/533QquZ6sxrjV1rkccua7Tx
+P8sqcedGHzpthPyPoPRVVbsABPxnTAKONPtq7Oc5/pMbmKFXsL8WcQSop3STBTgN
+7XjVhGb5iBpTsnQ0JtldY6+gRzjRDRje+QXa9LekAwOQkgxbWxHZNIblFnIYjdIb
+8qJ5ilsBv3/AEObbDAIZCHHAICi2yIdjoMZNO/gCpr8xbym2XTMkv+eTUMuwsNBs
+c3c9x6CfWCrslojGmOsBZ8OlrCYPsyhdjONicokKBMpWzH/a6ICmf72smWOs8pOw
+YrYe9ijNXoGOuLHH/cRUaCHdXSbxztKumFD0Lsvr7Qq50Oa6Vrof0k8OtLeMMYr7
+v9DJqoPUf0dlrRFyKd90kbDbrNgvyhV9UEteQ1H9AGQCb5xoU0L52MCP2YdPapRr
+5yCyc+M1myqpDFi1uv23wu6a5mspcGeBUmzl963n16b2tChBZGl0eWEgS2lzaG9y
+ZSA8YWRpdHlha2lzaG9yZUBnbWFpbC5jb20+iQE+BBMBAgAoAhsDBgsJCAcDAgYV
+CAIJCgsEFgIDAQIeAQIXgAUCVCb8xAUJB6gTuAAKCRDjteTjlZlUnWPGB/wJ8ly4
+9Yh8xhjJc+A8hJ9MQ6s8M3DdWIs/+QBfSxmqbl+ngreWfHuuF5ifGiZsvMP93w9x
+GZApuBfZ5tUBuov/IQRgCAjjN7dBdBDWrw5PoO0iiB4TSh/f+6RX0KDpDq3kglTL
+5MVY/DfFxlGhazNyv4dRc++R3RHyFqe6dT6Miwyn0XFPsa6DNSHg07N3xoFqIR/f
+epYWyqfJqtVQfeLtsxleXdyz1IFkg/vZEnDxdfS1dDJeakqISQiL8OfKKyHlZZ3M
+lsCQbVZlS6ReWD2Ws6XpLAlEog0v7DEw9PoSvImyLk16qnywrW4LvM7E3TthzWJF
+FmvefIxECATQnhjZiQE4BBMBAgAiBQJT2EH4AhsDBgsJCAcDAgYVCAIJCgsEFgID
+AQIeAQIXgAAKCRDjteTjlZlUnVesCACLO7cT1pY74di8Tl4K/hDWliUDhsfg0Qf2
+A7aKPRGC3kmk2NOEpbqIyfyomrlnWhJoAn8txmrBvlTwsIqAplZQez9owJ5ulNiq
+jOAzQm5ZKRLw+kBKA+TjQDWfWFn5IEcsL7TemXIBIfPqbnPIkylPGe1XIcE9Aidk
+3Bpn9gSqx/3OXDNN5UVI3i4BnF7mmx+DIqp3kj67LJdHz9C7SMMStKcz66d/H+6n
+bMmmv1R8jjnLSW1muttIWgfg3Eh2ys+Wn1Pasl/dRY5zsN4HcGkozWhmLu9KuAgy
+zABwhU6if9xLAZoMACPCP0f5P0+rKiOK7ImVo6YvLgH+E5xIkWbniQIcBBMBCAAG
+BQJUJyzSAAoJENBWhWoEENoMw8YP/i1cAyp8g2FLpBThn46sS4h23szRJXb/48S8
+UJHT8JsQ+s47Ut3pXPe+rpF5tMx8dNeAmQtT1KJXzuj8K9J5Vks5VZpxhTwiQQOe
+aC/fZFUTnTrOvbgT1MIUrIOEmct79TGnD0y4V2rnxM6236K/JmNLOBZqUBr/i2eM
+IEacJLIm63+BZ1CCt9Vimgghxms1dymePYHBVb05apHdYCrXOcFIBY7U7jZgUERd
+XkVwEfBvbSIV0pDZskVfnTMRObQmBTui3paYshb4uHXGkgEWAs3k4nPRN26GcYM0
+28SHzC3CUMz3ZHFd3+ZpHmdy+DrTpDb6t6XQq8ah+Ai6lWutBt1SE2M1YHsHgGEg
+jjcDWS8cF7tjFaqpL7Z7fKXoVMxt9mRN3raROFGtyiuvFW5Is482xoW3QGpWV4RR
+0iimfjwKdB/FN+vO+CCEApHCg9k5zqFZV1ZQk+vOwoDa9bxivoHtM3FtC9sZoJGp
+U7MEGjm11mJGdlp1V45ph6mNggXQD+Db9TGqr7oZHrH1K4nU0XNrudHW9Omu9son
+Ycojpxj1VHg8RdH7nit9qAHT/fSZWhsqqK+/9aVWSQj5qb5fzoPD9D/N3DdhCbNF
+5RCJKuO3/lmXPHvRup9N1gV1HC3rcRGCFL3wLmusK+bDr8epelNEmQPkKieChi9A
+6Kxsy48diQIcBBMBCAAGBQJUJy0RAAoJEP5x0hMhTrFb0m4P/22RF89IuXcCgUad
+JmMgFax7ncZGEgdvOIA3yza5RbaaDsONwXQv6nHWJMwGkk/ywSZdzwUO0pcknlvq
+DzUkF9P3HUGzT/qUkHtwciZUWZVURt0oKtozO44gDwTJmkZaXV0Sj0dbAaN6FgTa
+VX/F61dKttgAyIbVkndszLXFX8SzriE86CJVKZQLhsJDQNCOQADO0lqjMXAEUaEo
+g3cLR5c9VgR4ZgvFWnCSqG7wdoiSvm94u2E19Qz7lidxNFwxljc1NMVh0qBp7MIY
+1uh3UB3zI/E1NEkaurPZAxsY/6lqEZf+k8ZJVzKBzBmSIKLdeOdJi+1m/J4jnTy6
+AsSPkQHc+l5A9hBM/5FIYjn0Wibsduml94n/V2JJsn14mNjdBvyN2GcIT9Fke7fb
+Pa10MYWme4mxNbUHl2rMq+3XIN5i0A4eFlHIScVdXlcdJIDs2jsR2Suv/AvyygfK
+M8sDfbwhvZ+aAq6SzSBYwQxACRWUGZ6FhtEE6nptWNv9k8aUVXKiY/4zJ8NMKVGS
+NieEDzDRgkCpCr3qRRgwfWYRl0RZmR+B6Jd+suxgbFSjd+Gs6oUE8hZkPEEIVvAB
+UnWLf60nj5IHd/ejBLURO0vULg7VFaH4dJ0LX3/ge6jHJG1TfNex63FRjsV/EbF4
+vMmV2Ttx5aNGqt45sRFNMxwZZfANuQENBFPYQfgBCACurM8U/S5ODUEKQelZX6rn
+3IOFuUEF63tis5oV3hcqpO899DQqwCsbM+bq2RTOLOGGQi65KpLrTVTtxinLiT9l
+pW3NKsxNKXm+WmNJS6nNWhZY1qvYBDQSVF9DdvojAqRf69ECG18hV+Yo8FD0tdT9
+tHYWLuXyIaRfeflJ/yTcvXvHO/Wpdf/uwS5wABIfHNgtgNIUpcE8Mn2vQXIWZARd
+UWo9IbC6NpjzgnSv/e1KzRkRDKAHkkONjRKCTd8rZLugOBS86JRwTNhZSLq+yAEM
+YV+a5ZKJN129ObVA+MS36SknomwK60E8Az8+ntczZ3CXfap5fs9Gc9y2JuZmkrYv
+ABEBAAGJAR8EGAECAAkFAlPYQfgCGwwACgkQ47Xk45WZVJ2roAf9HLax9I2N7zrj
+CCjOdwtkprKUM5Ns11us0FIkmqMjzDkiHbQVuOZKtMBO4hYgKb81vZR4ZaiX8c/s
+DSO8vXGpThBCOGhHWPltuzrMj22I/jGs3E/bzkAZWDfeIOy0Cr8f+1tqgKWphSVS
+bb74qjLof6Q+aUIAlMxyrZL1SLaEw6S9Wvc2rLgRqgi+8M76sy5MJ9z0UPc6nWYh
+RJL5qBwRYc6yQxjvy6tUGqiByS4Z1dcBfPUp790BNHZg/4Oty2RE7gRcCyHUVY88
+IHTg7fxykf15yoE0/Yt8HZIUi6nseffUvYBoF+NURDG5KWMtsBVgmy6Xkz77iBOu
+ueIMpaeK7w==
+=Kovq
+-----END PGP PUBLIC KEY BLOCK-----
 pub   4096R/8DA303B2 2015-10-03
 uid                  Abdelhakim Deneche (CODE SIGNING KEY) <ad...@apache.org>
 sig 3        8DA303B2 2015-10-03  Abdelhakim Deneche (CODE SIGNING KEY) <ad...@apache.org>
@@ -888,3 +888,94 @@ m8D4W7gK/0JwAt+KjWfJCOBAcL6iF9bNcdASoeyRO+V/HRMetgyqJvOfz4PVhETE
 dEFlcbJRMhKySu13vlXzb9HaekPyoFNdNHre44eudrE=
 =Flch
 -----END PGP PUBLIC KEY BLOCK-----
+pub   rsa2048 2018-03-15 [SC]
+      8D152C4A0D5A7EE955A46A97D654297DAE5E0CCA
+uid           [ultimate] Volodymyr Vysotskyi (Volodymyr Vysotskyi works at CyberVision) <vo...@apache.org>
+sig 3        D654297DAE5E0CCA 2019-12-04  Volodymyr Vysotskyi (Volodymyr Vysotskyi works at CyberVision) <vo...@apache.org>
+sig          DDB6E9812AD3FAE3 2018-07-18  Julian Hyde (CODE SIGNING KEY) <jh...@apache.org>
+sig       X  9710B89BCA57AD7C 2018-08-02  PGP Global Directory Verification Key
+sig       X  9710B89BCA57AD7C 2018-10-21  PGP Global Directory Verification Key
+sig       X  9710B89BCA57AD7C 2018-11-03  PGP Global Directory Verification Key
+sig       X  9710B89BCA57AD7C 2018-11-17  PGP Global Directory Verification Key
+sig          477CA01DF573D4A6 2018-11-30  Vitalii Diravka (Apache release signing key) <vi...@apache.org>
+sub   rsa2048 2018-03-15 [E]
+sig          D654297DAE5E0CCA 2018-03-15  Volodymyr Vysotskyi (Volodymyr Vysotskyi works at CyberVision) <vo...@apache.org>
+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQENBFqqYckBCADE7eMvUi8mT8HULvxS2fcGoCZ0sKug11wavpm7l6f2BkFOIXFv
+/FRaZ2PGvUA0nb1nBZ2sIBP/AtoZJMPdxUngdLlgZ9GbkbvkuDNcD1lFDtm49m93
+IAQ5VWrzyY3KUjxA7tDv3c0ojgpnqu9S94pX8jQ/oF4DYmssHftPuVSFweaQke36
+JAB27A9XfpaLDvmYDCL8H2B6PNpeNCoeyXl/dBxpaoju9NPc1dnALa55MCSm+MP7
+pRqE8wHDj+QDkglEUhpOsQM2vERO4Xzh9RY8jk4KAaKBr1qoc7H9ZAY7zCexoeVW
++l4cNI8YCrO62FRR/RFKV9f0lYIla67nRzVnABEBAAG0VVZvbG9keW15ciBWeXNv
+dHNreWkgKFZvbG9keW15ciBWeXNvdHNreWkgd29ya3MgYXQgQ3liZXJWaXNpb24p
+IDx2b2xvZHlteXJAYXBhY2hlLm9yZz6JAU4EEwECADgCGwMCHgECF4AWIQSNFSxK
+DVp+6VWkapfWVCl9rl4MygUCXeeY9QULCQgHAwUVCgkICwUWAgMBAAAKCRDWVCl9
+rl4Mys5iCAC1YnFDuCrTAZdw+JQ65ebHHWmPJVtbOg/0tZsHV/Hq0eGIf/xpbESX
+u08LpECuW+8xkTWMmAzg9iXwcgG0M+Vd5CLljPre5bcWTGOi/gyv+ZGUn38vQZ2h
+PaRNDPa10W2cjqN+1puyhSgXn8EHq7etbWmYDObGK+ClDVOytLDeaRH0zlOp24VA
+J32lWDvP1sXRGP/FWuwKvIMp2LZXCasgdE0MSL07A3kMt73Wb6UiRgaBFNaqh8Wu
+5dRpJ4GUH0AquXD5Iys+vlruk0Aq2Jzob5LGTiezz21KZdDo6EIXWicw1JnxUZ8f
+zCNO6Ht+gU5pFDaOqoH25/QGuYvRKBiBiQIcBBABAgAGBQJbT2jwAAoJEN226YEq
+0/rjggIP/2Zs7AcM+pJ3xRdYHX50eLI7nl6vqsia8P2fM8II0Am8RV9Vdrr41uLU
+MRaSdyfDSJJ0p1WC0epe0dMQ2iPAikjvwRkl3LTOihJvdSXyW53hBbp9M+FgY682
+2MBy0O4l+fnCiQhoYbx7+YEkKPfwWj7yhdAtvcmAYWQKzxkIHyLyr3f5iOeM/wHT
+IBNy/mTF82b9mQUlxoi1S3kHlAf9bv3tvTTGuAXsuO7k7et/Ai2Le+tGeNQmlPAV
+5PYiXUmRq+o6pgTodhBYrmZrkoEbdthSVc50obOeanG4NzJkF0pj7R+tWEtaGUVM
+Bjw3RMfu2v9H7O3ncdtN140fpxLjj3OQlQVWUuewP8cVQEQwBreCqpaHeciikBJ/
+5p/S0DfeBVXfztwR9wjQwEfetRwBkX/oNJwEno+UmsWWFuXMGdRHhDwgh/EV2/DX
+nCei719jeqZc4vqF9JxvSPymXWtxYf8ACgT5rdTtNZwelzUV3BQnFfJuljRg6ced
+4PzDmZ7UqF+glGPEX46ZFl8dNoemFmFP8wJSVPXFoeipaJP0SAjViyUB2Bk+fh9n
+V4W4NwPidQPH6Bw5A6je5vamiUDRE4192jIlvN42JrWMcpDadI8fhOYGvMZzqk3v
+uub3kRcK2uL0wF+UvDo93wy3GsWeKGaThMp9B7JrXUNFI+LYWHsBiQEiBBABAgAM
+BQJbYup9BQMAEnUAAAoJEJcQuJvKV618VUcH/3eVwOfsEOLjEIJmCLQMpX/RKeFE
+wGP7XKAdM9u2djgoqlWX07k/zCos+9vbQPSh50kd1bnlQNAbMWNgKztzvp1gTMor
+gV4734hgIelHktHEjD7cI1GobBjqhHGytPTgOha2CQ/EcHRfKI0/Q0wK1jn38hGF
+sZOcSJ7IXq1+oSaXYOzeiG9G6bwdy+4MAzNNXH6QtD6ax3ZPatnsRaVrd4VJDNek
+BjYqkLXHqCs6hwjeP11uXVofLTLgvom30Gik/90df674ibPAsD9YuAZ3xr3uxE5n
+f1bR4BFYIOTjDr2JfJavaHeKMtTN9uX3I4OPFSF/7uVSSAa1FL3igc8DUyKJASIE
+EAECAAwFAlvLwaoFAwASdQAACgkQlxC4m8pXrXxyPwf+OkeII9AagEl+cItCSeRz
+LCsd6MubGaWIjx7fxXKxZn5jyLPv5yrAOT+kJFrKvH3NNTa2hMEEfMnN/vOv/uGq
+IARqC/jGChNrNMR/y1RCS3wPNh2DpOmZvjvpynBsyx9GyruAjOszriKxmiFCTKVP
+S1LSlSGKOrrbhhXMAwYUTN8++NbDhySqLUzuZos3QstP0g1up5xz8exiVDzhYv0L
+5rPmlKAjMv1AC3b2uK4+xs8l717/Os71rGm3YUiypXUL6dk2IP8VMCxE4eXHtGf4
+nPWkJgRvjnNih8Bs3f89Fnqp3W07vVCf9Y2iJJO0yPpxj69oLSCeSEeYTP/cWP5X
+I4kBIgQQAQIADAUCW92N6wUDABJ1AAAKCRCXELibyletfCvfB/9OR0bOqWXpX1h3
+rJeloUbz84rpP+H6cZ9g1u3GOfxfIUL0r0vRVmOjimhlD+FSUGeohHIJkRVVO8sg
+VnxqotEulbpQMSfIuaoJBT7S8rld79E/qmw5WLJppLc6x4iKOiFV4AL1hdjKYaCe
+732wczZ70xP+IkPyYIkQyFEQ44c5tGKBmKqcE+wUF6Ftf1gtOjsus57NcjEoIcEq
+3fyS7JSKSeLqnvfAWjR6qtp4/h7NU7Eo/KjSdonRptl3gbZRFabxEzcwD2GyAdpB
+Gz966C/P7PKft+I2UKUVpdITVSfq0kBGwc4sH60sotyBfuhtsudgjA+g2gVuKPfr
+Q+H2TRFmiQEiBBABAgAMBQJb71ouBQMAEnUAAAoJEJcQuJvKV61813YH/0ijiUe8
+fUkyFJ/m1Q4pf79u04F9+ZFqPPbyy9YSZtc23YptpKanT0KOxR6i5Y9pvZCzo2fi
+Rp/eBCWn9mtBx6RK+bGzIriBQcutrSMTSOmSsqIVXrBMQr/ub22U63d2Y5U23TaG
+vG0JJVp0fzKM77zqSPD0FNo/WutgfYtJ3zYRfIQXYiPjonpgDFA9JLtWeJRXnUpp
+t+e3mnCbn461mMsuJIc1UpAo1BiPFkjZ3hTE7ncQ2iF7foMg0jTESJEx+480GGTp
+zLO1S70c8wB1MwMx1kbuhq8QTO4DfyVr+Nb9+jAEh03NiwOGZk3GlmGSSnUGCMv/
+K3rsHdw6fLpGXhGJAjMEEAEKAB0WIQQOBSAoK/MosSg1jbZHfKAd9XPUpgUCXAFf
+uwAKCRBHfKAd9XPUpi+tD/sFtxSBN7tvIzcIHwQ+/bPaJPjXBMwA7fY5Q40McdCp
+PldA90qKhfUeI3wRBOIuhDFGkLiURWfYJrphjUx45nBvWLeZDe5iDh31aCXUpk6f
+VVGVgJLOGRTVORlcJpY96gvqTbcks9rt6Px/W4f7b1WIQsBNiBkdaSFCuaZ2XPby
+PWDL/94PXbPzCXGB5fJnfTq3EhOt2tUEO0L1R7fxRWnITDKKdIPdf7ht961W5y85
+LmOV96ZYjh98BcJQONVvxsScBom9Nvl9hDi8lSrmzVwCrBG/IcXEoi+CifHeF0Vn
+qkRt3Pv/X7XO6MYf/poMIK9fl0lD1JzOJ+1WypjQ40K7Uhl/t/ojaR/wq6fE/6RV
+c8u14aSpDpPZ8HsW5oE9yAkeOxUnrxEDYRmbsoJ0WnGbTdpe2qVBQuOb2fVkgSwV
+nPW/BbYQ4xvEemZas6htZ9C2IaZmYMg1yfoFYO1BTcPxdY1DAQ5lnXJ9yNAs3BL5
+nzFL53RJ4ED/bSqJSCCtb/avN3JF4sxUYJ6f2GXybPM2tm0EMncAm/JIssArBdUY
++O2ejV/+VvOkULZPCsNRsCSk+Ir2vzLSTf2astP55MFX+0iKBSsLOtRthsv6Vb1R
+0FudUELOwjsPvbW7OnmBvgI1kluCVkXqY27pkdtiBCi6viIkLMcbr2zIXX9IsV2Z
+zrkBDQRaqmHJAQgA1NmN3csmD3B2qK/DbE5J5sDmQnTm/orxj3F8t27fZ4575bic
+lxE5RuQ6dvXWIeT42AspL3I5MfiPqAzjPANOkTXHmHAVG+eLVqzKGkD+vw/H0/9a
+L10nwUuL3+HJotW6DE1rwYsdvPn3/g14WTUPbxwD9YZ1vrQqUOpWIY2VbnJgb4Gv
+RfVrm45oGg0dOMWOnMczqa+xawOOfV3Cn9HNTqkQSL/Me8OLaGrAMutl8wceMJLB
+x47NzlRHwWb8x/H16lvjcUyqetkr3BGuVoPUi0KWzb15/e3pN54vaQ6jTlShup2x
+01vKsOEjqLnp7IdmWEdGCX6xaSN2+lDqYxB6hwARAQABiQEfBBgBAgAJBQJaqmHJ
+AhsMAAoJENZUKX2uXgzKcFsH/iJNgXZXE1dWCPsIj+vjunRTNZHAeQCFgsJrKt2i
+isERCC8kSSVs2DkGggokcKPu+0zDb63dKunqussHyPYrx+ml3xklAV3fZrG8T/fz
+IDm9Mj6w/8PyMkYLTXp916lAiqdcc3C5XKYlwcdnqUYopTldhDhVcOzbWRiCoMZw
+p4m604rpOY05BobGKqiRFk9pinTf166wbaIqKgR26sxMf7cTFeNwCBUI3xW4Hsw+
+bsLN5kgGd7J8dlBP35a6S8fADwG3lSxa4yqifJza0AxyeNGxPlOLi+BRtHbAruF4
+dwEcCExwUQ/PAR289cGJPliPZkVHDckoqMVDZfWzdHMxXSo=
+=/R9/
+-----END PGP PUBLIC KEY BLOCK-----


[drill] 05/11: DRILL-7450: Improve performance for ANALYZE command

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit 20293b63c0bb559ae35d57f7cb1ab7fa24e9ee6d
Author: Volodymyr Vysotskyi <vv...@gmail.com>
AuthorDate: Fri Nov 22 19:53:08 2019 +0200

    DRILL-7450: Improve performance for ANALYZE command
    
    - Implement two-phase aggregation for the lowest metadata aggregate to optimize performance
    - Allow using complex functions with hash aggregate
    - Use hash aggregation for PHASE_1of2 for ANALYZE to reduce memory usage and avoid sorting non-aggregated data
    - Add sort above hash aggregation to fix correctness of merge exchange and stream aggregate
    
    closes #1907
---
 docs/dev/MetastoreAnalyze.md                       |  61 ++++-
 .../org/apache/drill/exec/expr/IsPredicate.java    |   2 +-
 .../drill/exec/expr/fn/DrillAggFuncHolder.java     |   4 +-
 .../expr/fn/DrillComplexWriterAggFuncHolder.java   |  40 ++-
 .../apache/drill/exec/expr/fn/DrillFuncHolder.java |  78 ++++--
 .../drill/exec/metastore/ColumnNamesOptions.java   |  80 ++++++
 .../metastore/analyze/AnalyzeFileInfoProvider.java |  24 +-
 .../metastore/analyze/AnalyzeInfoProvider.java     |  20 +-
 .../analyze/AnalyzeParquetInfoProvider.java        |  18 +-
 .../analyze/FileMetadataInfoCollector.java         |   3 +-
 .../analyze/MetadataAggregateContext.java          |  24 ++
 .../base/AbstractGroupScanWithMetadata.java        |  36 ++-
 .../exec/physical/config/HashToMergeExchange.java  |   3 +-
 ...MetadataAggPOP.java => MetadataHashAggPOP.java} |  26 +-
 ...tadataAggPOP.java => MetadataStreamAggPOP.java} |  19 +-
 .../exec/physical/impl/aggregate/HashAggBatch.java | 136 ++++++---
 .../physical/impl/aggregate/HashAggTemplate.java   |  15 +-
 .../physical/impl/aggregate/StreamingAggBatch.java |  11 +-
 .../physical/impl/flatten/FlattenRecordBatch.java  |   2 +-
 ...aAggBatch.java => MetadataAggregateHelper.java} | 173 +++++++-----
 .../impl/metadata/MetadataControllerBatch.java     |  44 +--
 .../impl/metadata/MetadataHandlerBatch.java        |  49 ++--
 .../impl/metadata/MetadataHashAggBatch.java        |  56 ++++
 ...eator.java => MetadataHashAggBatchCreator.java} |   8 +-
 .../impl/metadata/MetadataStreamAggBatch.java      |  62 +++++
 ...tor.java => MetadataStreamAggBatchCreator.java} |   8 +-
 .../physical/impl/project/ProjectRecordBatch.java  |   4 +-
 .../physical/impl/validate/BatchValidator.java     |   8 +
 .../resultSet/model/single/BaseReaderBuilder.java  |   2 +-
 .../exec/physical/rowSet/RowSetFormatter.java      |  13 +-
 .../apache/drill/exec/planner/PlannerPhase.java    |   2 +
 .../logical/ConvertCountToDirectScanRule.java      |   2 +-
 .../ConvertMetadataAggregateToDirectScanRule.java  | 271 ++++++++++++++++++
 .../planner/physical/DrillDistributionTrait.java   |  81 +++++-
 .../drill/exec/planner/physical/HashAggPrule.java  |  14 +-
 .../drill/exec/planner/physical/HashPrelUtil.java  |  37 +--
 .../exec/planner/physical/MetadataAggPrule.java    | 202 ++++++++++++--
 .../planner/physical/MetadataHandlerPrule.java     |   2 +-
 ...tadataAggPrel.java => MetadataHashAggPrel.java} |  22 +-
 ...dataAggPrel.java => MetadataStreamAggPrel.java} |  21 +-
 .../drill/exec/planner/physical/PrelUtil.java      |  27 +-
 .../sql/handlers/MetastoreAnalyzeTableHandler.java |  42 +--
 .../apache/drill/exec/store/ColumnExplorer.java    |  25 +-
 .../store/parquet/AbstractParquetGroupScan.java    |  98 +++++--
 .../ParquetFileTableMetadataProviderBuilder.java   |   4 +
 .../exec/store/pojo/DynamicPojoRecordReader.java   |  25 +-
 .../drill/exec/fn/impl/TestAggregateFunction.java  |   2 +-
 .../drill/exec/fn/impl/TestAggregateFunctions.java | 206 ++++++++++----
 .../physical/impl/agg/TestAggWithAnyValue.java     | 304 ++++++++++++++++-----
 .../physical/impl/agg/TestHashAggEmitOutcome.java  | 205 +++++++-------
 .../drill/exec/sql/TestMetastoreCommands.java      | 285 +++++++++++++++----
 .../test/resources/functions/test_covariance.json  |   6 +-
 .../resources/functions/test_logical_aggr.json     |   6 +-
 .../src/main/codegen/templates/ComplexWriters.java |   6 +-
 .../expression/FunctionHolderExpression.java       |   2 +-
 .../common/logical/data/MetadataAggregate.java     |   2 +-
 56 files changed, 2225 insertions(+), 703 deletions(-)

diff --git a/docs/dev/MetastoreAnalyze.md b/docs/dev/MetastoreAnalyze.md
index ec27c50..23b2b32 100644
--- a/docs/dev/MetastoreAnalyze.md
+++ b/docs/dev/MetastoreAnalyze.md
@@ -94,24 +94,63 @@ Analyze command specific operators:
  - `MetadataControllerBatch` - responsible for converting obtained metadata, fetching absent metadata from the Metastore
   and storing resulting metadata into the Metastore.
 
-`MetastoreAnalyzeTableHandler` forms plan  depending on segments count in the following form:
+`MetastoreAnalyzeTableHandler` forms a plan of the following shape, depending on the segment count:
 
 ```
-MetadataControllerBatch
+MetadataControllerRel
   ...
-    MetadataHandlerBatch
-      MetadataAggBatch(dir0, ...)
-        MetadataHandlerBatch
-          MetadataAggBatch(dir0, dir1, ...)
-            MetadataHandlerBatch
-              MetadataAggBatch(dir0, dir1, fqn, ...)
-                Scan(DYNAMIC_STAR **, ANY fqn, ...)
+    MetadataHandlerRel
+      MetadataAggRel(dir0, ...)
+        MetadataHandlerRel
+          MetadataAggRel(dir0, dir1, ...)
+            MetadataHandlerRel
+              MetadataAggRel(dir0, dir1, fqn, ...)
+                DrillScanRel(DYNAMIC_STAR **, ANY fqn, ...)
 ```
 
-The lowest `MetadataAggBatch` creates required aggregate calls for every (or interesting only) table columns
+For the case when `ANALYZE` uses columns for which statistics are present in the Parquet metadata,
+the `ConvertMetadataAggregateToDirectScanRule` rule will be applied to the following plan fragment:
+
+```
+MetadataAggRel(dir0, dir1, fqn, ...)
+  DrillScanRel(DYNAMIC_STAR **, ANY fqn, ...)
+```
+
+It will convert this fragment to a `DrillDirectScanRel` populated with row group metadata when `ANALYZE`
+was run for the `ROW_GROUP` metadata level.
+When the metadata level in `ANALYZE` is not `ROW_GROUP`, the plan above will be converted into the following plan:
+
+```
+MetadataAggRel(metadataLevel=FILE (or another non-ROW_GROUP value), createNewAggregations=false)
+  DrillDirectScanRel
+```
+
+When the plan is converted into the physical plan, two-phase aggregation may be used when the incoming row
+count is greater than the `planner.slice_target` option value. In this case, the lowest aggregation will be a hash
+aggregation executed on the same minor fragments where the scan is produced. A `Sort` operator will be placed
+above the hash aggregation, and a `HashToMergeExchange` operator above the `Sort` will send the aggregated, sorted data to the
+stream aggregate above.
+
+Example of the resulting plan:
+
+```
+MetadataControllerPrel
+  ...
+    MetadataStreamAggPrel(PHASE_1of1)
+      SortPrel
+        MetadataHandlerPrel
+          MetadataStreamAggPrel(PHASE_2of2)
+            HashToMergeExchangePrel
+              SortPrel
+                MetadataHashAggPrel(PHASE_1of2)
+                  ScanPrel
+```
+
+The lowest `MetadataStreamAggBatch` (or `MetadataHashAggBatch` in the case of two-phase aggregation with a
+`MetadataStreamAggBatch` above) creates the required aggregate calls for every table column (or only the interesting ones)
 and produces aggregations with grouping by segment columns that correspond to specific table level.
 `MetadataHandlerBatch` above it populates batch with additional information about metadata type and other info.
-`MetadataAggBatch` above merges metadata calculated before to obtain metadata for parent metadata levels and also stores incoming data to populate it to the Metastore later.
+`MetadataStreamAggBatch` above merges the metadata calculated before to obtain metadata for the parent metadata levels, and also stores the incoming data so it can be populated into the Metastore later.
 
 `MetadataControllerBatch` obtains all calculated metadata, converts it to the suitable form and sends it to the Metastore.
 
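To make the phase labels above concrete, here is a minimal sketch (illustrative only, not part of this patch) of how the two phases map onto the physical operator configs introduced later in this commit. The wrapper class and method names are invented for the example; the constructors and `OperatorPhase` values are the ones shown in the diff below.

```
import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
import org.apache.drill.exec.physical.base.PhysicalOperator;
import org.apache.drill.exec.physical.config.MetadataHashAggPOP;
import org.apache.drill.exec.physical.config.MetadataStreamAggPOP;
import org.apache.drill.exec.planner.physical.AggPrelBase.OperatorPhase;

class TwoPhaseMetadataAggSketch {

  // Phase 1: hash aggregate running on the scanning minor fragments;
  // the MetadataHashAggPOP constructor requires context.createNewAggregations() to be true.
  PhysicalOperator lowestPhase(PhysicalOperator scan, MetadataAggregateContext context) {
    return new MetadataHashAggPOP(scan, context, OperatorPhase.PHASE_1of2);
  }

  // Phase 2: streaming aggregate that consumes the sorted output of the merge exchange.
  PhysicalOperator upperPhase(PhysicalOperator mergeExchange, MetadataAggregateContext context) {
    return new MetadataStreamAggPOP(mergeExchange, context, OperatorPhase.PHASE_2of2);
  }
}
```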
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/IsPredicate.java b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/IsPredicate.java
index 21e012e..aa0b9fd 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/IsPredicate.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/IsPredicate.java
@@ -71,7 +71,7 @@ public class IsPredicate<C extends Comparable<C>> extends LogicalExpressionBase
    * @param stat statistics object
    * @return <tt>true</tt> if the input stat object is null or has invalid statistics; false otherwise
    */
-  static boolean isNullOrEmpty(ColumnStatistics stat) {
+  public static boolean isNullOrEmpty(ColumnStatistics stat) {
     return stat == null
         || !stat.contains(ColumnStatisticsKind.MIN_VALUE)
         || !stat.contains(ColumnStatisticsKind.MAX_VALUE)
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillAggFuncHolder.java b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillAggFuncHolder.java
index aac0abb..6203cd8 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillAggFuncHolder.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillAggFuncHolder.java
@@ -213,9 +213,7 @@ class DrillAggFuncHolder extends DrillFuncHolder {
         declareVarArgArray(g.getModel(), sub, inputVariables);
       }
       for (int i = 0; i < inputVariables.length; i++) {
-        ValueReference parameter = getAttributeParameter(i);
-        HoldingContainer inputVariable = inputVariables[i];
-        declare(sub, parameter, inputVariable.getHolder().type(), inputVariable.getHolder(), i);
+        declare(g.getModel(), sub, inputVariables[i], i);
       }
     }
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillComplexWriterAggFuncHolder.java b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillComplexWriterAggFuncHolder.java
index 8439983..eec5848 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillComplexWriterAggFuncHolder.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillComplexWriterAggFuncHolder.java
@@ -23,6 +23,8 @@ import org.apache.drill.common.expression.FunctionHolderExpression;
 import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.exec.expr.ClassGenerator;
 import org.apache.drill.exec.expr.ClassGenerator.HoldingContainer;
+import org.apache.drill.exec.physical.impl.aggregate.HashAggBatch;
+import org.apache.drill.exec.physical.impl.aggregate.HashAggTemplate;
 import org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch;
 import org.apache.drill.exec.physical.impl.aggregate.StreamingAggTemplate;
 import org.apache.drill.exec.record.VectorAccessibleComplexWriter;
@@ -56,10 +58,18 @@ public class DrillComplexWriterAggFuncHolder extends DrillAggFuncHolder {
 
   @Override
   public JVar[] renderStart(ClassGenerator<?> classGenerator, HoldingContainer[] inputVariables, FieldReference fieldReference) {
-    if (!classGenerator.getMappingSet().isHashAggMapping()) {  //Declare workspace vars for non-hash-aggregation.
-      JInvocation container = classGenerator.getMappingSet().getOutgoing().invoke("getOutgoingContainer");
+    JInvocation container = classGenerator.getMappingSet().getOutgoing().invoke("getOutgoingContainer");
 
-      complexWriter = classGenerator.declareClassField("complexWriter", classGenerator.getModel()._ref(ComplexWriter.class));
+    complexWriter = classGenerator.declareClassField("complexWriter", classGenerator.getModel()._ref(ComplexWriter.class));
+
+    if (classGenerator.getMappingSet().isHashAggMapping()) {
+      // Default name is "col", if not passed in a reference name for the output vector.
+      String refName = fieldReference == null ? "col" : fieldReference.getRootSegment().getPath();
+      JClass cwClass = classGenerator.getModel().ref(VectorAccessibleComplexWriter.class);
+      classGenerator.getSetupBlock().assign(complexWriter, cwClass.staticInvoke("getWriter").arg(refName).arg(container));
+
+      return super.renderStart(classGenerator, inputVariables, fieldReference);
+    } else {  //Declare workspace vars for non-hash-aggregation.
       writerIdx = classGenerator.declareClassField("writerIdx", classGenerator.getModel()._ref(int.class));
       lastWriterIdx = classGenerator.declareClassField("lastWriterIdx", classGenerator.getModel()._ref(int.class));
       //Default name is "col", if not passed in a reference name for the output vector.
@@ -72,8 +82,6 @@ public class DrillComplexWriterAggFuncHolder extends DrillAggFuncHolder {
       JVar[] workspaceJVars = declareWorkspaceVariables(classGenerator);
       generateBody(classGenerator, ClassGenerator.BlockType.SETUP, setup(), null, workspaceJVars, true);
       return workspaceJVars;
-    } else {
-      return super.renderStart(classGenerator, inputVariables, fieldReference);
     }
   }
 
@@ -84,26 +92,33 @@ public class DrillComplexWriterAggFuncHolder extends DrillAggFuncHolder {
         getRegisteredNames()[0]));
 
     JBlock sub = new JBlock(true, true);
-    JBlock topSub = sub;
     JClass aggBatchClass = null;
 
     if (classGenerator.getCodeGenerator().getDefinition() == StreamingAggTemplate.TEMPLATE_DEFINITION) {
       aggBatchClass = classGenerator.getModel().ref(StreamingAggBatch.class);
+    } else if (classGenerator.getCodeGenerator().getDefinition() == HashAggTemplate.TEMPLATE_DEFINITION) {
+      aggBatchClass = classGenerator.getModel().ref(HashAggBatch.class);
     }
-    assert aggBatchClass != null : "ComplexWriterAggFuncHolder should only be used with Streaming Aggregate Operator";
 
     JExpression aggBatch = JExpr.cast(aggBatchClass, classGenerator.getMappingSet().getOutgoing());
 
     classGenerator.getSetupBlock().add(aggBatch.invoke("addComplexWriter").arg(complexWriter));
     // Only set the writer if there is a position change. Calling setPosition may cause underlying writers to allocate
     // new vectors, thereby, losing the previously stored values
-    JBlock condAssignCW = classGenerator.getEvalBlock()._if(lastWriterIdx.ne(writerIdx))._then();
-    condAssignCW.add(complexWriter.invoke("setPosition").arg(writerIdx));
-    condAssignCW.assign(lastWriterIdx, writerIdx);
+    if (classGenerator.getMappingSet().isHashAggMapping()) {
+      classGenerator.getEvalBlock().add(
+          complexWriter
+              .invoke("setPosition")
+              .arg(classGenerator.getMappingSet().getWorkspaceIndex()));
+    } else {
+      JBlock condAssignCW = classGenerator.getEvalBlock()._if(lastWriterIdx.ne(writerIdx))._then();
+      condAssignCW.add(complexWriter.invoke("setPosition").arg(writerIdx));
+      condAssignCW.assign(lastWriterIdx, writerIdx);
+    }
     sub.decl(classGenerator.getModel()._ref(ComplexWriter.class), getReturnValue().getName(), complexWriter);
 
     // add the subblock after the out declaration.
-    classGenerator.getEvalBlock().add(topSub);
+    classGenerator.getEvalBlock().add(sub);
 
     addProtectedBlock(classGenerator, sub, add(), inputVariables, workspaceJVars, false);
     classGenerator.getEvalBlock().directStatement(String.format("//---- end of eval portion of %s function. ----//",
@@ -124,7 +139,8 @@ public class DrillComplexWriterAggFuncHolder extends DrillAggFuncHolder {
           JExpr._new(classGenerator.getHolderType(getReturnType())));
     }
     classGenerator.getEvalBlock().add(sub);
-    if (getReturnType().getMinorType() == TypeProtos.MinorType.LATE) {
+    if (getReturnType().getMinorType() == TypeProtos.MinorType.LATE
+        && !classGenerator.getMappingSet().isHashAggMapping()) {
       sub.assignPlus(writerIdx, JExpr.lit(1));
     }
     addProtectedBlock(classGenerator, sub, output(), null, workspaceJVars, false);
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFuncHolder.java b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFuncHolder.java
index 4d758f8..91f7cea 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFuncHolder.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/DrillFuncHolder.java
@@ -218,43 +218,21 @@ public abstract class DrillFuncHolder extends AbstractFuncHolder {
         if (decConstInputOnly && !inputVariables[i].isConstant()) {
           continue;
         }
-
-        ValueReference parameter = getAttributeParameter(i);
-        HoldingContainer inputVariable = inputVariables[i];
-        if (parameter.isFieldReader() && ! inputVariable.isReader()
-            && ! Types.isComplex(inputVariable.getMajorType()) && inputVariable.getMinorType() != MinorType.UNION) {
-          JType singularReaderClass = g.getModel()._ref(TypeHelper.getHolderReaderImpl(inputVariable.getMajorType().getMinorType(),
-              inputVariable.getMajorType().getMode()));
-          JType fieldReadClass = getParamClass(g.getModel(), parameter, inputVariable.getHolder().type());
-          JInvocation reader = JExpr._new(singularReaderClass).arg(inputVariable.getHolder());
-          declare(sub, parameter, fieldReadClass, reader, i);
-        } else if (!parameter.isFieldReader() && inputVariable.isReader() && Types.isComplex(parameter.getType())) {
-          // For complex data-types (repeated maps/lists/dicts) the input to the aggregate will be a FieldReader. However, aggregate
-          // functions like ANY_VALUE, will assume the input to be a RepeatedMapHolder etc. Generate boilerplate code, to map
-          // from FieldReader to respective Holder.
-          if (Types.isComplex(parameter.getType())) {
-            JType holderClass = getParamClass(g.getModel(), parameter, inputVariable.getHolder().type());
-            JAssignmentTarget holderVar = declare(sub, parameter, holderClass, JExpr._new(holderClass), i);
-            sub.assign(holderVar.ref("reader"), inputVariable.getHolder());
-          }
-        } else {
-          JExpression exprToAssign = inputVariable.getHolder();
-          if (parameter.isVarArg() && parameter.isFieldReader() && Types.isUnion(inputVariable.getMajorType())) {
-            exprToAssign = exprToAssign.ref("reader");
-          }
-          declare(sub, parameter, inputVariable.getHolder().type(), exprToAssign, i);
-        }
+        declare(g.getModel(), sub, inputVariables[i], i);
       }
     }
 
     JVar[] internalVars = new JVar[workspaceJVars.length];
     for (int i = 0; i < workspaceJVars.length; i++) {
       if (decConstInputOnly) {
-        internalVars[i] = sub.decl(g.getModel()._ref(attributes.getWorkspaceVars()[i].getType()), attributes.getWorkspaceVars()[i].getName(), workspaceJVars[i]);
+        internalVars[i] = sub.decl(
+            g.getModel()._ref(attributes.getWorkspaceVars()[i].getType()),
+            attributes.getWorkspaceVars()[i].getName(), workspaceJVars[i]);
       } else {
-        internalVars[i] = sub.decl(g.getModel()._ref(attributes.getWorkspaceVars()[i].getType()), attributes.getWorkspaceVars()[i].getName(), workspaceJVars[i]);
+        internalVars[i] = sub.decl(
+            g.getModel()._ref(attributes.getWorkspaceVars()[i].getType()),
+            attributes.getWorkspaceVars()[i].getName(), workspaceJVars[i]);
       }
-
     }
 
     Preconditions.checkNotNull(body);
@@ -267,6 +245,48 @@ public abstract class DrillFuncHolder extends AbstractFuncHolder {
   }
 
   /**
+   * Declares the attribute parameter that corresponds to the specified {@code currentIndex}
+   * in the specified {@code jBlock}, considering its type.
+   *
+   * @param model         code model to generate the code
+   * @param jBlock        block of code to be populated
+   * @param inputVariable input variable for current function
+   * @param currentIndex  index of current parameter
+   */
+  protected void declare(JCodeModel model, JBlock jBlock,
+      HoldingContainer inputVariable, int currentIndex) {
+    ValueReference parameter = getAttributeParameter(currentIndex);
+    if (parameter.isFieldReader()
+        && !inputVariable.isReader()
+        && !Types.isComplex(inputVariable.getMajorType())
+        && inputVariable.getMinorType() != MinorType.UNION) {
+      JType singularReaderClass = model._ref(
+          TypeHelper.getHolderReaderImpl(inputVariable.getMajorType().getMinorType(),
+          inputVariable.getMajorType().getMode()));
+      JType fieldReadClass = getParamClass(model, parameter, inputVariable.getHolder().type());
+      JInvocation reader = JExpr._new(singularReaderClass).arg(inputVariable.getHolder());
+      declare(jBlock, parameter, fieldReadClass, reader, currentIndex);
+    } else if (!parameter.isFieldReader()
+        && inputVariable.isReader()
+        && Types.isComplex(parameter.getType())) {
+      // For complex data-types (repeated maps/lists/dicts) the input to the aggregate will be a FieldReader. However, aggregate
+      // functions like ANY_VALUE, will assume the input to be a RepeatedMapHolder etc. Generate boilerplate code, to map
+      // from FieldReader to respective Holder.
+      if (Types.isComplex(parameter.getType())) {
+        JType holderClass = getParamClass(model, parameter, inputVariable.getHolder().type());
+        JAssignmentTarget holderVar = declare(jBlock, parameter, holderClass, JExpr._new(holderClass), currentIndex);
+        jBlock.assign(holderVar.ref("reader"), inputVariable.getHolder());
+      }
+    } else {
+      JExpression exprToAssign = inputVariable.getHolder();
+      if (parameter.isVarArg() && parameter.isFieldReader() && Types.isUnion(inputVariable.getMajorType())) {
+        exprToAssign = exprToAssign.ref("reader");
+      }
+      declare(jBlock, parameter, inputVariable.getHolder().type(), exprToAssign, currentIndex);
+    }
+  }
+
+  /**
    * Declares array for storing vararg function arguments.
    *
    * @param model          code model to generate the code
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/ColumnNamesOptions.java b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/ColumnNamesOptions.java
new file mode 100644
index 0000000..0b9faca
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/ColumnNamesOptions.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.metastore;
+
+import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.server.options.OptionManager;
+
+import java.util.StringJoiner;
+
+/**
+ * Holds system / session options that are used for obtaining partition / implicit / special column names.
+ */
+public class ColumnNamesOptions {
+  private final String fullyQualifiedName;
+  private final String partitionColumnNameLabel;
+  private final String rowGroupIndex;
+  private final String rowGroupStart;
+  private final String rowGroupLength;
+  private final String lastModifiedTime;
+
+  public ColumnNamesOptions(OptionManager optionManager) {
+    this.fullyQualifiedName = optionManager.getOption(ExecConstants.IMPLICIT_FQN_COLUMN_LABEL).string_val;
+    this.partitionColumnNameLabel = optionManager.getOption(ExecConstants.FILESYSTEM_PARTITION_COLUMN_LABEL).string_val;
+    this.rowGroupIndex = optionManager.getOption(ExecConstants.IMPLICIT_ROW_GROUP_INDEX_COLUMN_LABEL).string_val;
+    this.rowGroupStart = optionManager.getOption(ExecConstants.IMPLICIT_ROW_GROUP_START_COLUMN_LABEL).string_val;
+    this.rowGroupLength = optionManager.getOption(ExecConstants.IMPLICIT_ROW_GROUP_LENGTH_COLUMN_LABEL).string_val;
+    this.lastModifiedTime = optionManager.getOption(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL).string_val;
+  }
+
+  public String partitionColumnNameLabel() {
+    return partitionColumnNameLabel;
+  }
+
+  public String fullyQualifiedName() {
+    return fullyQualifiedName;
+  }
+
+  public String rowGroupIndex() {
+    return rowGroupIndex;
+  }
+
+  public String rowGroupStart() {
+    return rowGroupStart;
+  }
+
+  public String rowGroupLength() {
+    return rowGroupLength;
+  }
+
+  public String lastModifiedTime() {
+    return lastModifiedTime;
+  }
+
+  @Override
+  public String toString() {
+    return new StringJoiner(", ", ColumnNamesOptions.class.getSimpleName() + "[", "]")
+        .add("fullyQualifiedName='" + fullyQualifiedName + "'")
+        .add("partitionColumnNameLabel='" + partitionColumnNameLabel + "'")
+        .add("rowGroupIndex='" + rowGroupIndex + "'")
+        .add("rowGroupStart='" + rowGroupStart + "'")
+        .add("rowGroupLength='" + rowGroupLength + "'")
+        .add("lastModifiedTime='" + lastModifiedTime + "'")
+        .toString();
+  }
+}
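For orientation, a minimal usage sketch of the new holder (not part of the patch) follows; it mirrors what `AnalyzeFileInfoProvider.getLocationField` does further down in this commit, and the wrapper class name is invented for the example.

```
import org.apache.drill.common.expression.SchemaPath;
import org.apache.drill.exec.metastore.ColumnNamesOptions;
import org.apache.drill.exec.server.options.OptionManager;

class ColumnNamesOptionsSketch {

  // Build the holder once from the option manager, then reuse the cached labels
  // instead of looking up the individual option values repeatedly.
  SchemaPath locationField(OptionManager options) {
    ColumnNamesOptions columnNamesOptions = new ColumnNamesOptions(options);
    return SchemaPath.getSimplePath(columnNamesOptions.fullyQualifiedName());
  }
}
```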
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeFileInfoProvider.java b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeFileInfoProvider.java
index 0371cc2..a4bf0ad 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeFileInfoProvider.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeFileInfoProvider.java
@@ -18,17 +18,14 @@
 package org.apache.drill.exec.metastore.analyze;
 
 import org.apache.calcite.rel.core.TableScan;
-import org.apache.calcite.sql.SqlIdentifier;
-import org.apache.calcite.sql.parser.SqlParserPos;
 import org.apache.drill.common.expression.ExpressionPosition;
 import org.apache.drill.common.expression.FieldReference;
 import org.apache.drill.common.expression.FunctionCall;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.logical.data.NamedExpression;
-import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.planner.logical.DrillTable;
 import org.apache.drill.exec.planner.physical.PlannerSettings;
-import org.apache.drill.exec.server.options.OptionManager;
 import org.apache.drill.exec.store.ColumnExplorer;
 import org.apache.drill.exec.store.dfs.FileSelection;
 import org.apache.drill.exec.store.dfs.FormatSelection;
@@ -37,7 +34,7 @@ import org.apache.drill.metastore.metadata.MetadataType;
 import org.apache.drill.metastore.metadata.TableInfo;
 
 import java.io.IOException;
-import java.util.Arrays;
+import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.function.Supplier;
@@ -49,7 +46,7 @@ import java.util.stream.Collectors;
 public abstract class AnalyzeFileInfoProvider implements AnalyzeInfoProvider {
 
   @Override
-  public List<SchemaPath> getSegmentColumns(DrillTable table, OptionManager options) throws IOException {
+  public List<SchemaPath> getSegmentColumns(DrillTable table, ColumnNamesOptions columnNamesOptions) throws IOException {
     FormatSelection selection = (FormatSelection) table.getSelection();
 
     FileSelection fileSelection = selection.getSelection();
@@ -57,16 +54,17 @@ public abstract class AnalyzeFileInfoProvider implements AnalyzeInfoProvider {
       fileSelection = FileMetadataInfoCollector.getExpandedFileSelection(fileSelection);
     }
 
-    return ColumnExplorer.getPartitionColumnNames(fileSelection, options).stream()
+    return ColumnExplorer.getPartitionColumnNames(fileSelection, columnNamesOptions).stream()
         .map(SchemaPath::getSimplePath)
         .collect(Collectors.toList());
   }
 
   @Override
-  public List<SqlIdentifier> getProjectionFields(MetadataType metadataLevel, OptionManager options) {
-    return Arrays.asList(
-        new SqlIdentifier(options.getString(ExecConstants.IMPLICIT_FQN_COLUMN_LABEL), SqlParserPos.ZERO),
-        new SqlIdentifier(options.getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL), SqlParserPos.ZERO));
+  public List<SchemaPath> getProjectionFields(DrillTable table, MetadataType metadataLevel, ColumnNamesOptions columnNamesOptions) throws IOException {
+    List<SchemaPath> projectionList = new ArrayList<>(getSegmentColumns(table, columnNamesOptions));
+    projectionList.add(SchemaPath.getSimplePath(columnNamesOptions.fullyQualifiedName()));
+    projectionList.add(SchemaPath.getSimplePath(columnNamesOptions.lastModifiedTime()));
+    return Collections.unmodifiableList(projectionList);
   }
 
   @Override
@@ -78,8 +76,8 @@ public abstract class AnalyzeFileInfoProvider implements AnalyzeInfoProvider {
   }
 
   @Override
-  public SchemaPath getLocationField(OptionManager optionManager) {
-    return SchemaPath.getSimplePath(optionManager.getString(ExecConstants.IMPLICIT_FQN_COLUMN_LABEL));
+  public SchemaPath getLocationField(ColumnNamesOptions columnNamesOptions) {
+    return SchemaPath.getSimplePath(columnNamesOptions.fullyQualifiedName());
   }
 
   @Override
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeInfoProvider.java b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeInfoProvider.java
index 49b8430..88965c3 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeInfoProvider.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeInfoProvider.java
@@ -18,13 +18,12 @@
 package org.apache.drill.exec.metastore.analyze;
 
 import org.apache.calcite.rel.core.TableScan;
-import org.apache.calcite.sql.SqlIdentifier;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.logical.data.NamedExpression;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.physical.base.GroupScan;
 import org.apache.drill.exec.planner.logical.DrillTable;
 import org.apache.drill.exec.planner.physical.PlannerSettings;
-import org.apache.drill.exec.server.options.OptionManager;
 import org.apache.drill.exec.store.dfs.FormatSelection;
 import org.apache.drill.metastore.components.tables.BasicTablesRequests;
 import org.apache.drill.metastore.metadata.MetadataType;
@@ -42,20 +41,21 @@ public interface AnalyzeInfoProvider {
   /**
    * Returns list of segment column names for specified {@link DrillTable} table.
    *
-   * @param table   table for which should be returned segment column names
-   * @param options option manager
+   * @param table              table for which segment column names should be returned
+   * @param columnNamesOptions column names option values
    * @return list of segment column names
    */
-  List<SchemaPath> getSegmentColumns(DrillTable table, OptionManager options) throws IOException;
+  List<SchemaPath> getSegmentColumns(DrillTable table, ColumnNamesOptions columnNamesOptions) throws IOException;
 
   /**
    * Returns list of fields required for ANALYZE.
    *
-   * @param metadataLevel metadata level for analyze
-   * @param options       option manager
+   * @param table              drill table
+   * @param metadataLevel      metadata level for analyze
+   * @param columnNamesOptions column names option values
    * @return list of fields required for ANALYZE
    */
-  List<SqlIdentifier> getProjectionFields(MetadataType metadataLevel, OptionManager options);
+  List<SchemaPath> getProjectionFields(DrillTable table, MetadataType metadataLevel, ColumnNamesOptions columnNamesOptions) throws IOException;
 
   /**
    * Returns {@link MetadataInfoCollector} instance for obtaining information about segments, files, etc.
@@ -79,10 +79,10 @@ public interface AnalyzeInfoProvider {
    * Provides schema path to field which will be used as a location for specific table data,
    * for example, for file-based tables, it may be `fqn`.
    *
-   * @param optionManager option manager
+   * @param columnNamesOptions column names option values
    * @return location field
    */
-  SchemaPath getLocationField(OptionManager optionManager);
+  SchemaPath getLocationField(ColumnNamesOptions columnNamesOptions);
 
   /**
    * Returns expression which may be used to determine parent location for specific table data,
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeParquetInfoProvider.java b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeParquetInfoProvider.java
index 4ce1424..f48e9cc 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeParquetInfoProvider.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/AnalyzeParquetInfoProvider.java
@@ -17,14 +17,14 @@
  */
 package org.apache.drill.exec.metastore.analyze;
 
-import org.apache.calcite.sql.SqlIdentifier;
-import org.apache.calcite.sql.parser.SqlParserPos;
-import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.physical.base.GroupScan;
-import org.apache.drill.exec.server.options.OptionManager;
+import org.apache.drill.exec.planner.logical.DrillTable;
 import org.apache.drill.exec.store.parquet.ParquetGroupScan;
 import org.apache.drill.metastore.metadata.MetadataType;
 
+import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
@@ -38,12 +38,12 @@ public class AnalyzeParquetInfoProvider extends AnalyzeFileInfoProvider {
   public static final String TABLE_TYPE_NAME = "PARQUET";
 
   @Override
-  public List<SqlIdentifier> getProjectionFields(MetadataType metadataLevel, OptionManager options) {
-    List<SqlIdentifier> columnList = new ArrayList<>(super.getProjectionFields(metadataLevel, options));
+  public List<SchemaPath> getProjectionFields(DrillTable table, MetadataType metadataLevel, ColumnNamesOptions columnNamesOptions) throws IOException {
+    List<SchemaPath> columnList = new ArrayList<>(super.getProjectionFields(table, metadataLevel, columnNamesOptions));
     if (metadataLevel.includes(MetadataType.ROW_GROUP)) {
-      columnList.add(new SqlIdentifier(options.getString(ExecConstants.IMPLICIT_ROW_GROUP_INDEX_COLUMN_LABEL), SqlParserPos.ZERO));
-      columnList.add(new SqlIdentifier(options.getString(ExecConstants.IMPLICIT_ROW_GROUP_START_COLUMN_LABEL), SqlParserPos.ZERO));
-      columnList.add(new SqlIdentifier(options.getString(ExecConstants.IMPLICIT_ROW_GROUP_LENGTH_COLUMN_LABEL), SqlParserPos.ZERO));
+      columnList.add(SchemaPath.getSimplePath(columnNamesOptions.rowGroupIndex()));
+      columnList.add(SchemaPath.getSimplePath(columnNamesOptions.rowGroupStart()));
+      columnList.add(SchemaPath.getSimplePath(columnNamesOptions.rowGroupLength()));
     }
     return Collections.unmodifiableList(columnList);
   }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/FileMetadataInfoCollector.java b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/FileMetadataInfoCollector.java
index e9882d1..d9765d6 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/FileMetadataInfoCollector.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/FileMetadataInfoCollector.java
@@ -154,7 +154,8 @@ public class FileMetadataInfoCollector implements MetadataInfoCollector {
     String selectionRoot = selection.getSelection().getSelectionRoot().toUri().getPath();
 
     if (!Objects.equals(metastoreInterestingColumns, interestingColumns)
-        && (metastoreInterestingColumns == null || !metastoreInterestingColumns.containsAll(interestingColumns))
+        && metastoreInterestingColumns != null &&
+        (interestingColumns == null || !metastoreInterestingColumns.containsAll(interestingColumns))
         || TableStatisticsKind.ANALYZE_METADATA_LEVEL.getValue(basicRequests.tableMetadata(tableInfo)).compareTo(metadataLevel) != 0) {
       // do not update table scan and lists of segments / files / row groups,
       // metadata should be recalculated
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/MetadataAggregateContext.java b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/MetadataAggregateContext.java
index 46f6df5..9108345 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/MetadataAggregateContext.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/metastore/analyze/MetadataAggregateContext.java
@@ -22,6 +22,7 @@ import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
 import com.fasterxml.jackson.databind.annotation.JsonPOJOBuilder;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.logical.data.NamedExpression;
+import org.apache.drill.metastore.metadata.MetadataType;
 
 import java.util.List;
 import java.util.Objects;
@@ -36,12 +37,14 @@ public class MetadataAggregateContext {
   private final List<SchemaPath> interestingColumns;
   private final List<SchemaPath> excludedColumns;
   private final boolean createNewAggregations;
+  private final MetadataType metadataLevel;
 
   public MetadataAggregateContext(MetadataAggregateContextBuilder builder) {
     this.groupByExpressions = builder.groupByExpressions;
     this.interestingColumns = builder.interestingColumns;
     this.createNewAggregations = builder.createNewAggregations;
     this.excludedColumns = builder.excludedColumns;
+    this.metadataLevel = builder.metadataLevel;
   }
 
   @JsonProperty
@@ -64,6 +67,11 @@ public class MetadataAggregateContext {
     return excludedColumns;
   }
 
+  @JsonProperty
+  public MetadataType metadataLevel() {
+    return metadataLevel;
+  }
+
   @Override
   public String toString() {
     return new StringJoiner(",\n", MetadataAggregateContext.class.getSimpleName() + "[", "]")
@@ -78,11 +86,21 @@ public class MetadataAggregateContext {
     return new MetadataAggregateContextBuilder();
   }
 
+  public MetadataAggregateContextBuilder toBuilder() {
+    return new MetadataAggregateContextBuilder()
+        .groupByExpressions(groupByExpressions)
+        .interestingColumns(interestingColumns)
+        .createNewAggregations(createNewAggregations)
+        .excludedColumns(excludedColumns)
+        .metadataLevel(metadataLevel);
+  }
+
   @JsonPOJOBuilder(withPrefix = "")
   public static class MetadataAggregateContextBuilder {
     private List<NamedExpression> groupByExpressions;
     private List<SchemaPath> interestingColumns;
     private Boolean createNewAggregations;
+    private MetadataType metadataLevel;
     private List<SchemaPath> excludedColumns;
 
     public MetadataAggregateContextBuilder groupByExpressions(List<NamedExpression> groupByExpressions) {
@@ -90,6 +108,11 @@ public class MetadataAggregateContext {
       return this;
     }
 
+    public MetadataAggregateContextBuilder metadataLevel(MetadataType metadataLevel) {
+      this.metadataLevel = metadataLevel;
+      return this;
+    }
+
     public MetadataAggregateContextBuilder interestingColumns(List<SchemaPath> interestingColumns) {
       this.interestingColumns = interestingColumns;
       return this;
@@ -109,6 +132,7 @@ public class MetadataAggregateContext {
       Objects.requireNonNull(groupByExpressions, "groupByExpressions were not set");
       Objects.requireNonNull(createNewAggregations, "createNewAggregations was not set");
       Objects.requireNonNull(excludedColumns, "excludedColumns were not set");
+      Objects.requireNonNull(metadataLevel, "metadataLevel was not set");
       return new MetadataAggregateContext(this);
     }
   }
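A minimal sketch (not part of the patch) of how the extended builder can be used; the values are placeholders, the wrapper class is invented for the example, and the static `builder()` factory is assumed to be the method shown above returning a new `MetadataAggregateContextBuilder`. The new `metadataLevel(...)` setter is now mandatory for `build()`, and `toBuilder()` allows copying an existing context while overriding individual fields.

```
import java.util.Collections;
import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
import org.apache.drill.metastore.metadata.MetadataType;

class MetadataAggregateContextSketch {

  MetadataAggregateContext rowGroupLevelContext() {
    return MetadataAggregateContext.builder()
        .groupByExpressions(Collections.emptyList()) // placeholder: real plans group by segment columns
        .interestingColumns(null)                     // may be null
        .createNewAggregations(true)
        .excludedColumns(Collections.emptyList())
        .metadataLevel(MetadataType.ROW_GROUP)        // newly required by build()
        .build();
  }
}
```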
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/base/AbstractGroupScanWithMetadata.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/base/AbstractGroupScanWithMetadata.java
index de13ee5..2371c6d 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/base/AbstractGroupScanWithMetadata.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/base/AbstractGroupScanWithMetadata.java
@@ -85,9 +85,9 @@ import com.fasterxml.jackson.annotation.JsonProperty;
 /**
  * Represents table group scan with metadata usage.
  */
-public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupScan {
+public abstract class AbstractGroupScanWithMetadata<P extends TableMetadataProvider> extends AbstractFileGroupScan {
 
-  protected TableMetadataProvider metadataProvider;
+  protected P metadataProvider;
 
   // table metadata info
   protected TableMetadata tableMetadata;
@@ -118,7 +118,7 @@ public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupSca
     this.filter = filter;
   }
 
-  protected AbstractGroupScanWithMetadata(AbstractGroupScanWithMetadata that) {
+  protected AbstractGroupScanWithMetadata(AbstractGroupScanWithMetadata<P> that) {
     super(that.getUserName());
     this.columns = that.columns;
     this.filter = that.filter;
@@ -215,7 +215,7 @@ public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupSca
   }
 
   @Override
-  public TableMetadataProvider getMetadataProvider() {
+  public P getMetadataProvider() {
     return metadataProvider;
   }
 
@@ -516,7 +516,13 @@ public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupSca
   // partition pruning methods start
   @Override
   public List<SchemaPath> getPartitionColumns() {
-    return partitionColumns != null ? partitionColumns : new ArrayList<>();
+    if (partitionColumns == null) {
+      partitionColumns = metadataProvider.getPartitionColumns();
+      if (partitionColumns == null) {
+        partitionColumns = new ArrayList<>();
+      }
+    }
+    return partitionColumns;
   }
 
   @JsonIgnore
@@ -567,8 +573,8 @@ public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupSca
     return ColumnExplorer.isPartitionColumn(optionManager, schemaPath) || implicitColNames.contains(schemaPath.getRootSegmentPath());
   }
 
-  // protected methods for internal usage
-  protected Map<Path, FileMetadata> getFilesMetadata() {
+  @JsonIgnore
+  public Map<Path, FileMetadata> getFilesMetadata() {
     if (files == null) {
       files = metadataProvider.getFilesMetadataMap();
     }
@@ -583,14 +589,16 @@ public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupSca
     return tableMetadata;
   }
 
-  protected List<PartitionMetadata> getPartitionsMetadata() {
+  @JsonIgnore
+  public List<PartitionMetadata> getPartitionsMetadata() {
     if (partitions == null) {
       partitions = metadataProvider.getPartitionsMetadata();
     }
     return partitions;
   }
 
-  protected Map<Path, SegmentMetadata> getSegmentsMetadata() {
+  @JsonIgnore
+  public Map<Path, SegmentMetadata> getSegmentsMetadata() {
     if (segments == null) {
       segments = metadataProvider.getSegmentsMetadataMap();
     }
@@ -614,7 +622,7 @@ public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupSca
    * This class is responsible for filtering different metadata levels.
    */
   protected abstract static class GroupScanWithMetadataFilterer<B extends GroupScanWithMetadataFilterer<B>> {
-    protected final AbstractGroupScanWithMetadata source;
+    protected final AbstractGroupScanWithMetadata<? extends TableMetadataProvider> source;
 
     protected boolean matchAllMetadata = false;
 
@@ -952,7 +960,7 @@ public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupSca
         Iterable<T> metadataList,
         FilterPredicate<?> filterPredicate,
         OptionManager optionManager) {
-      List<T> qualifiedFiles = new ArrayList<>();
+      List<T> qualifiedMetadata = new ArrayList<>();
 
       for (T metadata : metadataList) {
         TupleMetadata schema = metadata.getSchema();
@@ -983,12 +991,12 @@ public abstract class AbstractGroupScanWithMetadata extends AbstractFileGroupSca
         if (matchAllMetadata) {
           matchAllMetadata = match == RowsMatch.ALL;
         }
-        qualifiedFiles.add(metadata);
+        qualifiedMetadata.add(metadata);
       }
-      if (qualifiedFiles.isEmpty()) {
+      if (qualifiedMetadata.isEmpty()) {
         matchAllMetadata = false;
       }
-      return qualifiedFiles;
+      return qualifiedMetadata;
     }
 
     protected abstract B self();
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/HashToMergeExchange.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/HashToMergeExchange.java
index 592f7c3..0828bea 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/HashToMergeExchange.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/HashToMergeExchange.java
@@ -32,8 +32,7 @@ import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeName;
 
 @JsonTypeName("hash-to-merge-exchange")
-public class HashToMergeExchange extends AbstractExchange{
-  static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(HashToMergeExchange.class);
+public class HashToMergeExchange extends AbstractExchange {
 
   private final LogicalExpression distExpr;
   private final List<Ordering> orderExprs;
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataAggPOP.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataHashAggPOP.java
similarity index 62%
copy from exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataAggPOP.java
copy to exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataHashAggPOP.java
index f31735c..35990c8 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataAggPOP.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataHashAggPOP.java
@@ -20,29 +20,41 @@ package org.apache.drill.exec.physical.config;
 import com.fasterxml.jackson.annotation.JsonCreator;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeName;
-import org.apache.drill.exec.physical.base.PhysicalOperator;
 import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
+import org.apache.drill.exec.physical.base.PhysicalOperator;
+import org.apache.drill.exec.planner.physical.AggPrelBase.OperatorPhase;
+import org.apache.drill.shaded.guava.com.google.common.base.Preconditions;
 
 import java.util.Collections;
 
-@JsonTypeName("metadataAggregate")
-public class MetadataAggPOP extends StreamingAggregate {
+@JsonTypeName("metadataHashAggregate")
+public class MetadataHashAggPOP extends HashAggregate {
   private final MetadataAggregateContext context;
+  private final OperatorPhase phase;
 
   @JsonCreator
-  public MetadataAggPOP(@JsonProperty("child") PhysicalOperator child,
-      @JsonProperty("context") MetadataAggregateContext context) {
-    super(child, context.groupByExpressions(), Collections.emptyList());
+  public MetadataHashAggPOP(@JsonProperty("child") PhysicalOperator child,
+      @JsonProperty("context") MetadataAggregateContext context,
+      @JsonProperty("phase") OperatorPhase phase) {
+    super(child, phase, context.groupByExpressions(), Collections.emptyList(), 1.0F);
+    Preconditions.checkArgument(context.createNewAggregations(),
+        "Hash aggregate for metadata collecting should be used only for creating new aggregations.");
     this.context = context;
+    this.phase = phase;
   }
 
   @Override
   protected PhysicalOperator getNewWithChild(PhysicalOperator child) {
-    return new MetadataAggPOP(child, context);
+    return new MetadataHashAggPOP(child, context, phase);
   }
 
   @JsonProperty
   public MetadataAggregateContext getContext() {
     return context;
   }
+
+  @JsonProperty
+  public OperatorPhase getPhase() {
+    return phase;
+  }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataAggPOP.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataStreamAggPOP.java
similarity index 72%
rename from exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataAggPOP.java
rename to exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataStreamAggPOP.java
index f31735c..e7220c3 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataAggPOP.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/config/MetadataStreamAggPOP.java
@@ -22,27 +22,36 @@ import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.annotation.JsonTypeName;
 import org.apache.drill.exec.physical.base.PhysicalOperator;
 import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
+import org.apache.drill.exec.planner.physical.AggPrelBase.OperatorPhase;
 
 import java.util.Collections;
 
-@JsonTypeName("metadataAggregate")
-public class MetadataAggPOP extends StreamingAggregate {
+@JsonTypeName("metadataStreamAggregate")
+public class MetadataStreamAggPOP extends StreamingAggregate {
   private final MetadataAggregateContext context;
+  private final OperatorPhase phase;
 
   @JsonCreator
-  public MetadataAggPOP(@JsonProperty("child") PhysicalOperator child,
-      @JsonProperty("context") MetadataAggregateContext context) {
+  public MetadataStreamAggPOP(@JsonProperty("child") PhysicalOperator child,
+      @JsonProperty("context") MetadataAggregateContext context,
+      @JsonProperty("phase") OperatorPhase phase) {
     super(child, context.groupByExpressions(), Collections.emptyList());
     this.context = context;
+    this.phase = phase;
   }
 
   @Override
   protected PhysicalOperator getNewWithChild(PhysicalOperator child) {
-    return new MetadataAggPOP(child, context);
+    return new MetadataStreamAggPOP(child, context, phase);
   }
 
   @JsonProperty
   public MetadataAggregateContext getContext() {
     return context;
   }
+
+  @JsonProperty
+  public OperatorPhase getPhase() {
+    return phase;
+  }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java
index f905687..45c670b 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggBatch.java
@@ -18,11 +18,20 @@
 package org.apache.drill.exec.physical.impl.aggregate;
 
 import java.io.IOException;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
 import java.util.Map;
 
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.Types;
+import org.apache.drill.exec.expr.DrillFuncHolderExpr;
 import org.apache.drill.exec.planner.physical.AggPrelBase;
+import org.apache.drill.exec.record.VectorContainer;
+import org.apache.drill.exec.vector.UntypedNullHolder;
+import org.apache.drill.exec.vector.UntypedNullVector;
+import org.apache.drill.exec.vector.complex.writer.BaseWriter;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.apache.drill.common.exceptions.UserException;
 import org.apache.drill.common.expression.ErrorCollector;
@@ -69,20 +78,25 @@ import org.apache.drill.exec.vector.ValueVector;
 
 import com.sun.codemodel.JExpr;
 import com.sun.codemodel.JVar;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
-  static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(HashAggBatch.class);
+  static final Logger logger = LoggerFactory.getLogger(HashAggBatch.class);
 
   private HashAggregator aggregator;
-  private RecordBatch incoming;
+  protected RecordBatch incoming;
   private LogicalExpression[] aggrExprs;
   private TypedFieldId[] groupByOutFieldIds;
   private TypedFieldId[] aggrOutFieldIds;      // field ids for the outgoing batch
   private final List<Comparator> comparators;
   private BatchSchema incomingSchema;
   private boolean wasKilled;
+  private List<BaseWriter.ComplexWriter> complexWriters;
 
-  private int numGroupByExprs, numAggrExprs;
+  private int numGroupByExprs;
+  private int numAggrExprs;
+  private boolean firstBatch = true;
 
   // This map saves the mapping between outgoing column and incoming column.
   private Map<String, String> columnMapping;
@@ -136,13 +150,17 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
             valuesRowWidth += ((FixedWidthVector) w.getValueVector()).getValueWidth();
           }
         } else {
-          int columnWidth;
+          int columnWidth = 0;
+          TypeProtos.MajorType type = w.getField().getType();
           if (columnMapping.get(w.getValueVector().getField().getName()) == null) {
-             columnWidth = TypeHelper.getSize(w.getField().getType());
+            if (!Types.isComplex(type)) {
+              columnWidth = TypeHelper.getSize(type);
+            }
           } else {
-            RecordBatchSizer.ColumnSize columnSize = getRecordBatchSizer().getColumn(columnMapping.get(w.getValueVector().getField().getName()));
+            RecordBatchSizer.ColumnSize columnSize = getRecordBatchSizer()
+                .getColumn(columnMapping.get(w.getValueVector().getField().getName()));
             if (columnSize == null) {
-              columnWidth = TypeHelper.getSize(w.getField().getType());
+              columnWidth = TypeHelper.getSize(type);
             } else {
               columnWidth = columnSize.getAllocSizePerEntry();
             }
@@ -214,6 +232,11 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
   }
 
   @Override
+  public VectorContainer getOutgoingContainer() {
+    return container;
+  }
+
+  @Override
   public int getRecordCount() {
     if (state == BatchState.DONE) {
       return 0;
@@ -222,7 +245,7 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
   }
 
   @Override
-  public void buildSchema() throws SchemaChangeException {
+  public void buildSchema() {
     IterOutcome outcome = next(incoming);
     switch (outcome) {
       case NONE:
@@ -305,7 +328,23 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
       state = BatchState.DONE;
       // fall through
     case RETURN_OUTCOME:
-      return aggregator.getOutcome();
+      // rebuilds the schema in the case of complex writer expressions,
+      // since vectors would be added to the batch at run time
+      IterOutcome outcome = aggregator.getOutcome();
+      switch (outcome) {
+        case OK:
+        case OK_NEW_SCHEMA:
+          if (firstBatch) {
+            if (CollectionUtils.isNotEmpty(complexWriters)) {
+              container.buildSchema(SelectionVectorMode.NONE);
+              outcome = IterOutcome.OK_NEW_SCHEMA;
+            }
+            firstBatch = false;
+          }
+          // fall thru
+        default:
+          return outcome;
+      }
 
     case UPDATE_AGGREGATOR:
       context.getExecutorState().fail(UserException.unsupportedError()
@@ -316,6 +355,7 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
           .build(logger));
       close();
       killIncoming(false);
+      firstBatch = false;
       return IterOutcome.STOP;
     default:
       throw new IllegalStateException(String.format("Unknown state %s.", out));
@@ -345,7 +385,12 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
     }
   }
 
-  private HashAggregator createAggregatorInternal() throws SchemaChangeException, ClassTransformationException,
+  @SuppressWarnings("unused") // used in generated code
+  public void addComplexWriter(final BaseWriter.ComplexWriter writer) {
+    complexWriters.add(writer);
+  }
+
+  protected HashAggregator createAggregatorInternal() throws SchemaChangeException, ClassTransformationException,
       IOException {
     CodeGenerator<HashAggregator> top = CodeGenerator.get(HashAggregator.TEMPLATE_DEFINITION, context.getOptions());
     ClassGenerator<HashAggregator> cg = top.getRoot();
@@ -355,8 +400,8 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
     // top.saveCodeForDebugging(true);
     container.clear();
 
-    numGroupByExprs = (popConfig.getGroupByExprs() != null) ? popConfig.getGroupByExprs().size() : 0;
-    numAggrExprs = (popConfig.getAggrExprs() != null) ? popConfig.getAggrExprs().size() : 0;
+    numGroupByExprs = (getKeyExpressions() != null) ? getKeyExpressions().size() : 0;
+    numAggrExprs = (getValueExpressions() != null) ? getValueExpressions().size() : 0;
     aggrExprs = new LogicalExpression[numAggrExprs];
     groupByOutFieldIds = new TypedFieldId[numGroupByExprs];
     aggrOutFieldIds = new TypedFieldId[numAggrExprs];
@@ -364,7 +409,7 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
     ErrorCollector collector = new ErrorCollectorImpl();
 
     for (int i = 0; i < numGroupByExprs; i++) {
-      NamedExpression ne = popConfig.getGroupByExprs().get(i);
+      NamedExpression ne = getKeyExpressions().get(i);
       final LogicalExpression expr =
           ExpressionTreeMaterializer.materialize(ne.getExpr(), incoming, collector, context.getFunctionRegistry());
       if (expr == null) {
@@ -381,7 +426,7 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
 
     int extraNonNullColumns = 0; // each of SUM, MAX and MIN gets an extra bigint column
     for (int i = 0; i < numAggrExprs; i++) {
-      NamedExpression ne = popConfig.getAggrExprs().get(i);
+      NamedExpression ne = getValueExpressions().get(i);
       final LogicalExpression expr = ExpressionTreeMaterializer.materialize(ne.getExpr(), incoming, collector, context.getFunctionRegistry());
 
       if (expr instanceof IfExpression) {
@@ -396,30 +441,45 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
         continue;
       }
 
-      final MaterializedField outputField = MaterializedField.create(ne.getRef().getAsNamePart().getName(), expr.getMajorType());
-      ValueVector vv = TypeHelper.getNewVector(outputField, oContext.getAllocator());
-      aggrOutFieldIds[i] = container.add(vv);
+      // Populate the complex writers for complex exprs
+      if (expr instanceof DrillFuncHolderExpr &&
+          ((DrillFuncHolderExpr) expr).getHolder().isComplexWriterFuncHolder()) {
+        if (complexWriters == null) {
+          complexWriters = new ArrayList<>();
+        } else {
+          complexWriters.clear();
+        }
+        // The reference name will be passed to ComplexWriter, used as the name of the output vector from the writer.
+        ((DrillFuncHolderExpr) expr).setFieldReference(ne.getRef());
+        MaterializedField field = MaterializedField.create(ne.getRef().getAsNamePart().getName(), UntypedNullHolder.TYPE);
+        container.add(new UntypedNullVector(field, container.getAllocator()));
+        aggrExprs[i] = expr;
+      } else {
+        MaterializedField outputField = MaterializedField.create(ne.getRef().getAsNamePart().getName(), expr.getMajorType());
+        ValueVector vv = TypeHelper.getNewVector(outputField, oContext.getAllocator());
+        aggrOutFieldIds[i] = container.add(vv);
 
-      aggrExprs[i] = new ValueVectorWriteExpression(aggrOutFieldIds[i], expr, true);
+        aggrExprs[i] = new ValueVectorWriteExpression(aggrOutFieldIds[i], expr, true);
 
-      if (expr instanceof FunctionHolderExpression) {
-        String funcName = ((FunctionHolderExpression) expr).getName();
-        if (funcName.equals("sum") || funcName.equals("max") || funcName.equals("min")) {
-          extraNonNullColumns++;
-        }
-        List<LogicalExpression> args = ((FunctionCall) ne.getExpr()).args;
-        if (!args.isEmpty()) {
-          if (args.get(0) instanceof SchemaPath) {
-            columnMapping.put(outputField.getName(), ((SchemaPath) args.get(0)).getAsNamePart().getName());
-          } else if (args.get(0) instanceof FunctionCall) {
-            FunctionCall functionCall = (FunctionCall) args.get(0);
-            if (functionCall.args.get(0) instanceof SchemaPath) {
-              columnMapping.put(outputField.getName(), ((SchemaPath) functionCall.args.get(0)).getAsNamePart().getName());
+        if (expr instanceof FunctionHolderExpression) {
+          String funcName = ((FunctionHolderExpression) expr).getName();
+          if (funcName.equals("sum") || funcName.equals("max") || funcName.equals("min")) {
+            extraNonNullColumns++;
+          }
+          List<LogicalExpression> args = ((FunctionCall) ne.getExpr()).args;
+          if (!args.isEmpty()) {
+            if (args.get(0) instanceof SchemaPath) {
+              columnMapping.put(outputField.getName(), ((SchemaPath) args.get(0)).getAsNamePart().getName());
+            } else if (args.get(0) instanceof FunctionCall) {
+              FunctionCall functionCall = (FunctionCall) args.get(0);
+              if (functionCall.args.get(0) instanceof SchemaPath) {
+                columnMapping.put(outputField.getName(), ((SchemaPath) functionCall.args.get(0)).getAsNamePart().getName());
+              }
             }
           }
+        } else {
+          columnMapping.put(outputField.getName(), ne.getRef().getAsNamePart().getName());
         }
-      } else {
-        columnMapping.put(outputField.getName(), ne.getRef().getAsNamePart().getName());
       }
     }
 
@@ -433,7 +493,7 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
     HashTableConfig htConfig =
         // TODO - fix the validator on this option
         new HashTableConfig((int)context.getOptions().getOption(ExecConstants.MIN_HASH_TABLE_SIZE),
-            HashTable.DEFAULT_LOAD_FACTOR, popConfig.getGroupByExprs(), null /* no probe exprs */, comparators);
+            HashTable.DEFAULT_LOAD_FACTOR, getKeyExpressions(), null /* no probe exprs */, comparators);
 
     agg.setup(popConfig, htConfig, context, oContext, incoming, this,
         aggrExprs,
@@ -445,6 +505,14 @@ public class HashAggBatch extends AbstractRecordBatch<HashAggregate> {
     return agg;
   }
 
+  protected List<NamedExpression> getKeyExpressions() {
+    return popConfig.getGroupByExprs();
+  }
+
+  protected List<NamedExpression> getValueExpressions() {
+    return popConfig.getAggrExprs();
+  }
+
   private void setupUpdateAggrValues(ClassGenerator<HashAggregator> cg) {
     cg.setMappingSet(UpdateAggrValuesMapping);
 
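
The hunks above let HashAggBatch host complex-writer aggregate functions: generated code registers writers through addComplexWriter(), an UntypedNullVector placeholder is put into the container for each such expression, and the first delivered batch is promoted to OK_NEW_SCHEMA so downstream operators see the vectors the writers add at run time. A minimal standalone sketch of that promotion logic, using hypothetical class and enum names rather than Drill's own types:

    import java.util.ArrayList;
    import java.util.List;

    class ComplexWriterOutcomeSketch {
      enum Outcome { OK, OK_NEW_SCHEMA, NONE }

      private final List<Object> complexWriters = new ArrayList<>();
      private boolean firstBatch = true;

      // In the real operator this is invoked from generated code; here it just
      // records that a complex writer produced output for this batch.
      void addComplexWriter(Object writer) {
        complexWriters.add(writer);
      }

      // Promotes OK to OK_NEW_SCHEMA on the first batch that carries
      // complex-writer output, mirroring the switch added to HashAggBatch above.
      Outcome deliver(Outcome aggregatorOutcome) {
        Outcome outcome = aggregatorOutcome;
        if ((outcome == Outcome.OK || outcome == Outcome.OK_NEW_SCHEMA) && firstBatch) {
          if (!complexWriters.isEmpty()) {
            outcome = Outcome.OK_NEW_SCHEMA; // schema was rebuilt to include run-time vectors
          }
          firstBatch = false;
        }
        return outcome;
      }
    }
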
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java
index d166353..7fcfd99 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/HashAggTemplate.java
@@ -34,6 +34,8 @@ import org.apache.drill.common.exceptions.UserException;
 import org.apache.drill.common.expression.ExpressionPosition;
 import org.apache.drill.common.expression.FieldReference;
 import org.apache.drill.common.expression.LogicalExpression;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.common.types.Types;
 import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.cache.VectorSerializer.Writer;
 import org.apache.drill.exec.compile.sig.RuntimeOverridden;
@@ -779,12 +781,19 @@ public abstract class HashAggTemplate implements HashAggregator {
     while (outgoingIter.hasNext()) {
       ValueVector vv = outgoingIter.next().getValueVector();
 
-      AllocationHelper.allocatePrecomputedChildCount(vv, records, maxColumnWidth, 0);
+      // Prevent allocating complex vectors here to avoid losing their content
+      // since their writers will still be used in generated code
+      TypeProtos.MajorType majorType = vv.getField().getType();
+      if (!Types.isComplex(majorType)
+          && !Types.isUnion(majorType)
+          && !Types.isRepeated(majorType)) {
+        AllocationHelper.allocatePrecomputedChildCount(vv, records, maxColumnWidth, 0);
+      }
     }
 
     long memAdded = allocator.getAllocatedMemory() - allocatedBefore;
-    if ( memAdded > estOutgoingAllocSize ) {
-      logger.trace("Output values allocated {} but the estimate was only {}. Adjusting ...",memAdded,estOutgoingAllocSize);
+    if (memAdded > estOutgoingAllocSize) {
+      logger.trace("Output values allocated {} but the estimate was only {}. Adjusting ...", memAdded, estOutgoingAllocSize);
       estOutgoingAllocSize = memAdded;
     }
     outContainer.setRecordCount(records);
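
The allocation change above skips pre-allocating complex, union and repeated vectors so that content produced through their writers is not wiped out; only simple vectors are sized up front. A rough illustration of that guard, with a hypothetical vector-kind enum standing in for Drill's MajorType checks:

    class AllocationGuardSketch {
      enum VectorKind { FIXED_WIDTH, VARIABLE_WIDTH, MAP, LIST, UNION, REPEATED }

      // Mirrors the condition added to the allocation loop: complex, union and
      // repeated vectors are left alone because their writers manage memory themselves.
      static boolean shouldPreAllocate(VectorKind kind) {
        switch (kind) {
          case MAP:
          case LIST:
          case UNION:
          case REPEATED:
            return false;
          default:
            return true;
        }
      }
    }
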
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/StreamingAggBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/StreamingAggBatch.java
index 8fc7118..c3b504a 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/StreamingAggBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/aggregate/StreamingAggBatch.java
@@ -45,7 +45,6 @@ import org.apache.drill.exec.expr.CodeGenerator;
 import org.apache.drill.exec.expr.DrillFuncHolderExpr;
 import org.apache.drill.exec.expr.ExpressionTreeMaterializer;
 import org.apache.drill.exec.expr.HoldingContainerExpression;
-import org.apache.drill.exec.expr.TypeHelper;
 import org.apache.drill.exec.expr.ValueVectorWriteExpression;
 import org.apache.drill.exec.expr.fn.FunctionGenerationHelper;
 import org.apache.drill.exec.ops.FragmentContext;
@@ -476,8 +475,8 @@ public class StreamingAggBatch extends AbstractRecordBatch<StreamingAggregate> {
       keyExprs[i] = expr;
       MaterializedField outputField = MaterializedField.create(ne.getRef().getLastSegment().getNameSegment().getPath(),
                                                                       expr.getMajorType());
-      ValueVector vector = TypeHelper.getNewVector(outputField, oContext.getAllocator());
-      keyOutputIds[i] = container.add(vector);
+      container.addOrGet(outputField);
+      keyOutputIds[i] = container.getValueVectorId(ne.getRef());
     }
 
     for (int i = 0; i < valueExprs.length; i++) {
@@ -501,15 +500,15 @@ public class StreamingAggBatch extends AbstractRecordBatch<StreamingAggregate> {
           complexWriters.clear();
         }
         // The reference name will be passed to ComplexWriter, used as the name of the output vector from the writer.
-        ((DrillFuncHolderExpr) expr).getFieldReference(ne.getRef());
+        ((DrillFuncHolderExpr) expr).setFieldReference(ne.getRef());
         MaterializedField field = MaterializedField.create(ne.getRef().getAsNamePart().getName(), UntypedNullHolder.TYPE);
         container.add(new UntypedNullVector(field, container.getAllocator()));
         valueExprs[i] = expr;
       } else {
         MaterializedField outputField = MaterializedField.create(ne.getRef().getLastSegment().getNameSegment().getPath(),
             expr.getMajorType());
-        ValueVector vector = TypeHelper.getNewVector(outputField, oContext.getAllocator());
-        TypedFieldId id = container.add(vector);
+        container.addOrGet(outputField);
+        TypedFieldId id = container.getValueVectorId(ne.getRef());
         valueExprs[i] = new ValueVectorWriteExpression(id, expr, true);
       }
     }
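
Replacing container.add(TypeHelper.getNewVector(...)) with container.addOrGet(outputField) lets the outgoing container reuse a vector that already exists for the same field instead of always creating a fresh one; the intent appears to be keeping the container stable across repeated schema setup. A simplified add-or-get sketch over a plain map, with hypothetical classes that are not Drill's VectorContainer:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.Function;

    class AddOrGetSketch<V> {
      private final Map<String, V> vectorsByField = new LinkedHashMap<>();

      // Returns the existing vector for the field if present, otherwise creates
      // and registers a new one - the behaviour the change above relies on.
      V addOrGet(String fieldName, Function<String, V> vectorFactory) {
        return vectorsByField.computeIfAbsent(fieldName, vectorFactory);
      }
    }
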
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/flatten/FlattenRecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/flatten/FlattenRecordBatch.java
index cee7625..34e9d7f 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/flatten/FlattenRecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/flatten/FlattenRecordBatch.java
@@ -440,7 +440,7 @@ public class FlattenRecordBatch extends AbstractSingleRecordBatch<FlattenPOP> {
         }
 
         // The reference name will be passed to ComplexWriter, used as the name of the output vector from the writer.
-        ((DrillFuncHolderExpr) expr).getFieldReference(namedExpression.getRef());
+        ((DrillFuncHolderExpr) expr).setFieldReference(namedExpression.getRef());
         cg.addExpr(expr);
       } else {
         // need to do evaluation.
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggregateHelper.java
similarity index 68%
rename from exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatch.java
rename to exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggregateHelper.java
index 51dfb15..1cca788 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggregateHelper.java
@@ -27,23 +27,17 @@ import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.expression.ValueExpressions;
 import org.apache.drill.common.logical.data.NamedExpression;
 import org.apache.drill.common.types.TypeProtos;
-import org.apache.drill.exec.ExecConstants;
-import org.apache.drill.exec.exception.ClassTransformationException;
-import org.apache.drill.exec.exception.OutOfMemoryException;
-import org.apache.drill.exec.exception.SchemaChangeException;
-import org.apache.drill.exec.metastore.analyze.MetastoreAnalyzeConstants;
-import org.apache.drill.exec.ops.FragmentContext;
-import org.apache.drill.exec.physical.config.MetadataAggPOP;
-import org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch;
-import org.apache.drill.exec.physical.impl.aggregate.StreamingAggregator;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.metastore.analyze.AnalyzeColumnUtils;
+import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
+import org.apache.drill.exec.metastore.analyze.MetastoreAnalyzeConstants;
+import org.apache.drill.exec.planner.physical.AggPrelBase;
 import org.apache.drill.exec.planner.types.DrillRelDataTypeSystem;
 import org.apache.drill.exec.record.BatchSchema;
 import org.apache.drill.exec.record.MaterializedField;
-import org.apache.drill.exec.record.RecordBatch;
+import org.apache.drill.metastore.metadata.MetadataType;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 
-import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.Collections;
@@ -53,26 +47,32 @@ import java.util.Map;
 import java.util.stream.StreamSupport;
 
 /**
- * Operator which adds aggregate calls for all incoming columns to calculate required metadata and produces aggregations.
- * If aggregation is performed on top of another aggregation, required aggregate calls for merging metadata will be added.
+ * Helper class for constructing aggregate value expressions required for metadata collecting.
  */
-public class MetadataAggBatch extends StreamingAggBatch {
-
-  private List<NamedExpression> valueExpressions;
+public class MetadataAggregateHelper {
+  private final List<NamedExpression> valueExpressions;
+  private final MetadataAggregateContext context;
+  private final ColumnNamesOptions columnNamesOptions;
+  private final BatchSchema schema;
+  private final AggPrelBase.OperatorPhase phase;
 
-  public MetadataAggBatch(MetadataAggPOP popConfig, RecordBatch incoming, FragmentContext context) throws OutOfMemoryException {
-    super(popConfig, incoming, context);
+  public MetadataAggregateHelper(MetadataAggregateContext context, ColumnNamesOptions columnNamesOptions,
+      BatchSchema schema, AggPrelBase.OperatorPhase phase) {
+    this.context = context;
+    this.columnNamesOptions = columnNamesOptions;
+    this.schema = schema;
+    this.phase = phase;
+    this.valueExpressions = new ArrayList<>();
+    createAggregatorInternal();
   }
 
-  @Override
-  protected StreamingAggregator createAggregatorInternal()
-      throws SchemaChangeException, ClassTransformationException, IOException {
-    valueExpressions = new ArrayList<>();
-    MetadataAggPOP popConfig = (MetadataAggPOP) this.popConfig;
+  public List<NamedExpression> getValueExpressions() {
+    return valueExpressions;
+  }
 
-    List<SchemaPath> excludedColumns = popConfig.getContext().excludedColumns();
+  private void createAggregatorInternal() {
+    List<SchemaPath> excludedColumns = context.excludedColumns();
 
-    BatchSchema schema = incoming.getSchema();
     // Iterates through input expressions and adds aggregate calls for table fields
     // to collect required statistics (MIN, MAX, COUNT, etc.) or aggregate calls to merge incoming metadata
     getUnflattenedFileds(Lists.newArrayList(schema), null)
@@ -90,19 +90,36 @@ public class MetadataAggBatch extends StreamingAggBatch {
           fieldsList.add(FieldReference.getWithQuotedRef(filedName));
         });
 
-    if (popConfig.getContext().createNewAggregations()) {
+    if (createNewAggregations()) {
       addMetadataAggregateCalls();
       // infer schema from incoming data
       addSchemaCall(fieldsList);
-      addNewMetadataAggregations();
+      // adds any_value(`location`) call for SEGMENT level
+      if (context.metadataLevel() == MetadataType.SEGMENT) {
+        addLocationAggCall(columnNamesOptions.fullyQualifiedName());
+      }
     } else {
-      addCollectListCall(fieldsList);
+      if (!context.createNewAggregations()) {
+        // collects incoming metadata
+        addCollectListCall(fieldsList);
+      }
       addMergeSchemaCall();
+      String locationField = MetastoreAnalyzeConstants.LOCATION_FIELD;
+
+      if (context.createNewAggregations()) {
+        locationField = columnNamesOptions.fullyQualifiedName();
+      }
+
+      if (context.metadataLevel() == MetadataType.SEGMENT) {
+        addParentLocationAggCall();
+      } else {
+        addLocationAggCall(locationField);
+      }
     }
 
     for (SchemaPath excludedColumn : excludedColumns) {
-      if (excludedColumn.equals(SchemaPath.getSimplePath(context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_START_COLUMN_LABEL)))
-          || excludedColumn.equals(SchemaPath.getSimplePath(context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_LENGTH_COLUMN_LABEL)))) {
+      if (excludedColumn.equals(SchemaPath.getSimplePath(columnNamesOptions.rowGroupStart()))
+          || excludedColumn.equals(SchemaPath.getSimplePath(columnNamesOptions.rowGroupLength()))) {
         LogicalExpression lastModifiedTime = new FunctionCall("any_value",
             Collections.singletonList(
                 FieldReference.getWithQuotedRef(excludedColumn.getRootSegmentPath())),
@@ -113,20 +130,69 @@ public class MetadataAggBatch extends StreamingAggBatch {
       }
     }
 
-    addMaxLastModifiedCall();
+    addLastModifiedCall();
+  }
+
+  /**
+   * Adds any_value(parentPath(`location`)) aggregate call to the value expressions list.
+   */
+  private void addParentLocationAggCall() {
+    valueExpressions.add(
+        new NamedExpression(
+            new FunctionCall(
+                "any_value",
+                Collections.singletonList(
+                    new FunctionCall(
+                        "parentPath",
+                        Collections.singletonList(SchemaPath.getSimplePath(MetastoreAnalyzeConstants.LOCATION_FIELD)),
+                        ExpressionPosition.UNKNOWN)),
+                ExpressionPosition.UNKNOWN),
+            FieldReference.getWithQuotedRef(MetastoreAnalyzeConstants.LOCATION_FIELD)));
+  }
+
+  /**
+   * Adds any_value(`location`) aggregate call to the value expressions list.
+   *
+   * @param locationField name of the location field
+   */
+  private void addLocationAggCall(String locationField) {
+    valueExpressions.add(
+        new NamedExpression(
+            new FunctionCall(
+                "any_value",
+                Collections.singletonList(SchemaPath.getSimplePath(locationField)),
+                ExpressionPosition.UNKNOWN),
+            FieldReference.getWithQuotedRef(MetastoreAnalyzeConstants.LOCATION_FIELD)));
+  }
 
-    return super.createAggregatorInternal();
+  /**
+   * Checks whether incoming data is not grouped, so corresponding aggregate calls should be created.
+   *
+   * @return {@code true} if incoming data is not grouped, {@code false} otherwise.
+   */
+  private boolean createNewAggregations() {
+    return context.createNewAggregations()
+        && (phase == AggPrelBase.OperatorPhase.PHASE_1of2
+        || phase == AggPrelBase.OperatorPhase.PHASE_1of1);
   }
 
   /**
    * Adds {@code max(`lastModifiedTime`)} function call to the value expressions list.
    */
-  private void addMaxLastModifiedCall() {
-    String lastModifiedColumn = context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL);
-    LogicalExpression lastModifiedTime = new FunctionCall("max",
-        Collections.singletonList(
-            FieldReference.getWithQuotedRef(lastModifiedColumn)),
-        ExpressionPosition.UNKNOWN);
+  private void addLastModifiedCall() {
+    String lastModifiedColumn = columnNamesOptions.lastModifiedTime();
+    LogicalExpression lastModifiedTime;
+    if (createNewAggregations()) {
+      lastModifiedTime = new FunctionCall("any_value",
+          Collections.singletonList(
+              FieldReference.getWithQuotedRef(lastModifiedColumn)),
+          ExpressionPosition.UNKNOWN);
+    } else {
+      lastModifiedTime = new FunctionCall("max",
+          Collections.singletonList(
+              FieldReference.getWithQuotedRef(lastModifiedColumn)),
+          ExpressionPosition.UNKNOWN);
+    }
 
     valueExpressions.add(new NamedExpression(lastModifiedTime,
         FieldReference.getWithQuotedRef(lastModifiedColumn)));
@@ -140,8 +206,7 @@ public class MetadataAggBatch extends StreamingAggBatch {
    */
   private void addCollectListCall(List<LogicalExpression> fieldList) {
     ArrayList<LogicalExpression> collectListArguments = new ArrayList<>(fieldList);
-    MetadataAggPOP popConfig = (MetadataAggPOP) this.popConfig;
-    List<SchemaPath> excludedColumns = popConfig.getContext().excludedColumns();
+    List<SchemaPath> excludedColumns = context.excludedColumns();
     // populate columns which weren't included in the schema, but should be collected to the COLLECTED_MAP_FIELD
     for (SchemaPath logicalExpressions : excludedColumns) {
       // adds string literal with field name to the list
@@ -170,17 +235,6 @@ public class MetadataAggBatch extends StreamingAggBatch {
   }
 
   /**
-   * Adds {@code collect_to_list_varchar(`fqn`)} call to collect file paths into the list.
-   */
-  private void addNewMetadataAggregations() {
-    LogicalExpression locationsExpr = new FunctionCall("collect_to_list_varchar",
-        Collections.singletonList(SchemaPath.getSimplePath(context.getOptions().getString(ExecConstants.IMPLICIT_FQN_COLUMN_LABEL))),
-        ExpressionPosition.UNKNOWN);
-
-    valueExpressions.add(new NamedExpression(locationsExpr, FieldReference.getWithQuotedRef(MetastoreAnalyzeConstants.LOCATIONS_FIELD)));
-  }
-
-  /**
    * Adds a call to {@code schema()}} function with specified fields list
    * as arguments of this function to obtain their schema.
    *
@@ -220,8 +274,7 @@ public class MetadataAggBatch extends StreamingAggBatch {
     for (MaterializedField field : fields) {
       // statistics collecting is not supported for array types
       if (field.getType().getMode() != TypeProtos.DataMode.REPEATED) {
-        MetadataAggPOP popConfig = (MetadataAggPOP) this.popConfig;
-        List<SchemaPath> excludedColumns = popConfig.getContext().excludedColumns();
+        List<SchemaPath> excludedColumns = context.excludedColumns();
         // excludedColumns are applied for root fields only
         if (parentFields != null || !excludedColumns.contains(SchemaPath.getSimplePath(field.getName()))) {
           List<String> currentPath;
@@ -231,12 +284,12 @@ public class MetadataAggBatch extends StreamingAggBatch {
             currentPath = new ArrayList<>(parentFields);
             currentPath.add(field.getName());
           }
-          if (field.getType().getMinorType() == TypeProtos.MinorType.MAP && popConfig.getContext().createNewAggregations()) {
+          if (field.getType().getMinorType() == TypeProtos.MinorType.MAP && createNewAggregations()) {
             fieldNameRefMap.putAll(getUnflattenedFileds(field.getChildren(), currentPath));
           } else {
             SchemaPath schemaPath = SchemaPath.getCompoundPath(currentPath.toArray(new String[0]));
             // adds backticks for popConfig.createNewAggregations() to ensure that field will be parsed correctly
-            String name = popConfig.getContext().createNewAggregations() ? schemaPath.toExpr() : schemaPath.getRootSegmentPath();
+            String name = createNewAggregations() ? schemaPath.toExpr() : schemaPath.getRootSegmentPath();
             fieldNameRefMap.put(name, new FieldReference(schemaPath));
           }
         }
@@ -254,9 +307,8 @@ public class MetadataAggBatch extends StreamingAggBatch {
    * @param fieldName field name
    */
   private void addColumnAggregateCalls(FieldReference fieldRef, String fieldName) {
-    MetadataAggPOP popConfig = (MetadataAggPOP) this.popConfig;
-    List<SchemaPath> interestingColumns = popConfig.getContext().interestingColumns();
-    if (popConfig.getContext().createNewAggregations()) {
+    List<SchemaPath> interestingColumns = context.interestingColumns();
+    if (createNewAggregations()) {
       if (interestingColumns == null || interestingColumns.contains(fieldRef)) {
         // collect statistics for all or only interesting columns if they are specified
         AnalyzeColumnUtils.COLUMN_STATISTICS_FUNCTIONS.forEach((statisticsKind, sqlKind) -> {
@@ -280,9 +332,4 @@ public class MetadataAggBatch extends StreamingAggBatch {
       valueExpressions.add(new NamedExpression(functionCall, fieldRef));
     }
   }
-
-  @Override
-  protected List<NamedExpression> getValueExpressions() {
-    return valueExpressions;
-  }
 }
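
After this refactoring the aggregate expressions for ANALYZE are produced by a plain helper rather than a dedicated operator, so the streaming and hash aggregation variants can share them. The choice between any_value and max for the last-modified-time column depends on whether the phase still sees raw, not yet aggregated data. A stand-alone sketch of that selection under those assumptions, with hypothetical names:

    class LastModifiedCallSketch {
      enum Phase { PHASE_1of1, PHASE_1of2, PHASE_2of2 }

      // Mirrors addLastModifiedCall(): a first phase over raw data keeps the
      // per-file value with any_value; merge phases take the max over rows that
      // were already aggregated upstream.
      static String lastModifiedFunction(boolean createNewAggregations, Phase phase) {
        boolean firstPhaseOverRawData = createNewAggregations
            && (phase == Phase.PHASE_1of2 || phase == Phase.PHASE_1of1);
        return firstPhaseOverRawData ? "any_value" : "max";
      }
    }
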
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataControllerBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataControllerBatch.java
index 0c94e3d..9ccae49 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataControllerBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataControllerBatch.java
@@ -21,8 +21,8 @@ import org.apache.commons.lang3.StringUtils;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.types.TypeProtos;
 import org.apache.drill.common.types.Types;
-import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.exception.OutOfMemoryException;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.metastore.analyze.AnalyzeColumnUtils;
 import org.apache.drill.exec.metastore.analyze.MetastoreAnalyzeConstants;
 import org.apache.drill.exec.ops.FragmentContext;
@@ -105,6 +105,7 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
   private final Map<String, MetadataInfo> metadataToHandle;
   private final StatisticsRecordCollector statisticsCollector;
   private final List<TableMetadataUnit> metadataUnits;
+  private final ColumnNamesOptions columnNamesOptions;
 
   private boolean firstLeft = true;
   private boolean firstRight = true;
@@ -123,6 +124,7 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
             .collect(Collectors.toMap(MetadataInfo::identifier, Function.identity()));
     this.metadataUnits = new ArrayList<>();
     this.statisticsCollector = new StatisticsCollectorImpl();
+    this.columnNamesOptions = new ColumnNamesOptions(context.getOptions());
   }
 
   protected boolean setupNewSchema() {
@@ -132,6 +134,7 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
     container.addOrGet(MetastoreAnalyzeConstants.SUMMARY_FIELD_NAME, Types.required(TypeProtos.MinorType.VARCHAR), null);
 
     container.buildSchema(BatchSchema.SelectionVectorMode.NONE);
+    container.setEmpty();
 
     return true;
   }
@@ -428,7 +431,6 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
   private PartitionMetadata getPartitionMetadata(TupleReader reader, List<StatisticsHolder> metadataStatistics,
       Map<SchemaPath, ColumnStatistics> columnStatistics, int nestingLevel) {
     List<String> segmentColumns = popConfig.getContext().segmentColumns();
-    String lastModifiedTimeCol = context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL);
 
     String segmentKey = segmentColumns.size() > 0
         ? reader.column(segmentColumns.iterator().next()).scalar().getString()
@@ -450,7 +452,7 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
         .columnsStatistics(columnStatistics)
         .metadataStatistics(metadataStatistics)
         .locations(getIncomingLocations(reader))
-        .lastModifiedTime(Long.parseLong(reader.column(lastModifiedTimeCol).scalar().getString()))
+        .lastModifiedTime(Long.parseLong(reader.column(columnNamesOptions.lastModifiedTime()).scalar().getString()))
 //            .column(SchemaPath.getSimplePath("dir1"))
 //            .partitionValues()
         .schema(TupleMetadata.of(reader.column(MetastoreAnalyzeConstants.SCHEMA_FIELD).scalar().getString()))
@@ -460,7 +462,6 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
   @SuppressWarnings("unchecked")
   private BaseTableMetadata getTableMetadata(TupleReader reader, List<StatisticsHolder> metadataStatistics,
       Map<SchemaPath, ColumnStatistics> columnStatistics) {
-    String lastModifiedTimeCol = context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL);
     List<StatisticsHolder> updatedMetaStats = new ArrayList<>(metadataStatistics);
     updatedMetaStats.add(new StatisticsHolder(popConfig.getContext().analyzeMetadataLevel(), TableStatisticsKind.ANALYZE_METADATA_LEVEL));
 
@@ -477,7 +478,7 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
         .partitionKeys(Collections.emptyMap())
         .interestingColumns(popConfig.getContext().interestingColumns())
         .location(popConfig.getContext().location())
-        .lastModifiedTime(Long.parseLong(reader.column(lastModifiedTimeCol).scalar().getString()))
+        .lastModifiedTime(Long.parseLong(reader.column(columnNamesOptions.lastModifiedTime()).scalar().getString()))
         .schema(TupleMetadata.of(reader.column(MetastoreAnalyzeConstants.SCHEMA_FIELD).scalar().getString()))
         .build();
 
@@ -494,7 +495,6 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
   private SegmentMetadata getSegmentMetadata(TupleReader reader, List<StatisticsHolder> metadataStatistics,
       Map<SchemaPath, ColumnStatistics> columnStatistics, int nestingLevel) {
     List<String> segmentColumns = popConfig.getContext().segmentColumns();
-    String lastModifiedTimeCol = context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL);
 
     String segmentKey = segmentColumns.size() > 0
         ? reader.column(segmentColumns.iterator().next()).scalar().getString()
@@ -521,7 +521,7 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
         .locations(getIncomingLocations(reader))
         .column(segmentColumns.size() > 0 ? SchemaPath.getSimplePath(segmentColumns.get(nestingLevel - 1)) : null)
         .partitionValues(partitionValues)
-        .lastModifiedTime(Long.parseLong(reader.column(lastModifiedTimeCol).scalar().getString()))
+        .lastModifiedTime(Long.parseLong(reader.column(columnNamesOptions.lastModifiedTime()).scalar().getString()))
         .schema(TupleMetadata.of(reader.column(MetastoreAnalyzeConstants.SCHEMA_FIELD).scalar().getString()))
         .build();
   }
@@ -529,7 +529,6 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
   private FileMetadata getFileMetadata(TupleReader reader, List<StatisticsHolder> metadataStatistics,
       Map<SchemaPath, ColumnStatistics> columnStatistics, int nestingLevel) {
     List<String> segmentColumns = popConfig.getContext().segmentColumns();
-    String lastModifiedTimeCol = context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL);
 
     String segmentKey = segmentColumns.size() > 0
         ? reader.column(segmentColumns.iterator().next()).scalar().getString()
@@ -555,15 +554,13 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
         .columnsStatistics(columnStatistics)
         .metadataStatistics(metadataStatistics)
         .path(path)
-        .lastModifiedTime(Long.parseLong(reader.column(lastModifiedTimeCol).scalar().getString()))
+        .lastModifiedTime(Long.parseLong(reader.column(columnNamesOptions.lastModifiedTime()).scalar().getString()))
         .schema(TupleMetadata.of(reader.column(MetastoreAnalyzeConstants.SCHEMA_FIELD).scalar().getString()))
         .build();
   }
 
   private RowGroupMetadata getRowGroupMetadata(TupleReader reader,List<StatisticsHolder> metadataStatistics,
       Map<SchemaPath, ColumnStatistics> columnStatistics, int nestingLevel) {
-    String lastModifiedTimeCol = context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL);
-    String rgi = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_INDEX_COLUMN_LABEL);
 
     List<String> segmentColumns = popConfig.getContext().segmentColumns();
     String segmentKey = segmentColumns.size() > 0
@@ -577,7 +574,7 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
 
     Path path = new Path(reader.column(MetastoreAnalyzeConstants.LOCATION_FIELD).scalar().getString());
 
-    int rowGroupIndex = Integer.parseInt(reader.column(rgi).scalar().getString());
+    int rowGroupIndex = Integer.parseInt(reader.column(columnNamesOptions.rowGroupIndex()).scalar().getString());
 
     String metadataIdentifier = MetadataIdentifierUtils.getRowGroupMetadataIdentifier(partitionValues, path, rowGroupIndex);
 
@@ -595,7 +592,7 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
         .hostAffinity(Collections.emptyMap())
         .rowGroupIndex(rowGroupIndex)
         .path(path)
-        .lastModifiedTime(Long.parseLong(reader.column(lastModifiedTimeCol).scalar().getString()))
+        .lastModifiedTime(Long.parseLong(reader.column(columnNamesOptions.lastModifiedTime()).scalar().getString()))
         .schema(TupleMetadata.of(reader.column(MetastoreAnalyzeConstants.SCHEMA_FIELD).scalar().getString()))
         .build();
   }
@@ -644,19 +641,22 @@ public class MetadataControllerBatch extends AbstractBinaryRecordBatch<MetadataC
   @SuppressWarnings("unchecked")
   private List<StatisticsHolder> getMetadataStatistics(TupleReader reader, TupleMetadata columnMetadata) {
     List<StatisticsHolder> metadataStatistics = new ArrayList<>();
-    String rgs = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_START_COLUMN_LABEL);
-    String rgl = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_LENGTH_COLUMN_LABEL);
+    String rgs = columnNamesOptions.rowGroupStart();
+    String rgl = columnNamesOptions.rowGroupLength();
     for (ColumnMetadata column : columnMetadata) {
       String columnName = column.name();
+      ObjectReader objectReader = reader.column(columnName);
       if (AnalyzeColumnUtils.isMetadataStatisticsField(columnName)) {
-        metadataStatistics.add(new StatisticsHolder(reader.column(columnName).getObject(),
+        metadataStatistics.add(new StatisticsHolder(objectReader.getObject(),
             AnalyzeColumnUtils.getStatisticsKind(columnName)));
-      } else if (columnName.equals(rgs)) {
-        metadataStatistics.add(new StatisticsHolder(Long.parseLong(reader.column(columnName).scalar().getString()),
-            new BaseStatisticsKind(ExactStatisticsConstants.START, true)));
-      } else if (columnName.equals(rgl)) {
-        metadataStatistics.add(new StatisticsHolder(Long.parseLong(reader.column(columnName).scalar().getString()),
-            new BaseStatisticsKind(ExactStatisticsConstants.LENGTH, true)));
+      } else if (!objectReader.isNull()) {
+        if (columnName.equals(rgs)) {
+          metadataStatistics.add(new StatisticsHolder(Long.parseLong(objectReader.scalar().getString()),
+              new BaseStatisticsKind(ExactStatisticsConstants.START, true)));
+        } else if (columnName.equals(rgl)) {
+          metadataStatistics.add(new StatisticsHolder(Long.parseLong(objectReader.scalar().getString()),
+              new BaseStatisticsKind(ExactStatisticsConstants.LENGTH, true)));
+        }
       }
     }
     return metadataStatistics;
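
MetadataControllerBatch now resolves the implicit column labels (last modified time, row group index, start and length) once through a ColumnNamesOptions instance instead of re-reading session options at every call site, and it skips null readers before parsing row-group statistics. A small sketch of the resolve-once idea; the option keys and default labels below are invented placeholders, not Drill's actual option names:

    import java.util.Map;

    class ColumnNamesSketch {
      private final String lastModifiedTime;
      private final String rowGroupIndex;
      private final String rowGroupStart;
      private final String rowGroupLength;

      // Reads the configurable implicit column labels once from an options map
      // (a stand-in for Drill's option manager) and exposes them through getters.
      ColumnNamesSketch(Map<String, String> options) {
        this.lastModifiedTime = options.getOrDefault("implicit.lmt.column", "lmt");
        this.rowGroupIndex = options.getOrDefault("implicit.rgi.column", "rgi");
        this.rowGroupStart = options.getOrDefault("implicit.rgs.column", "rgs");
        this.rowGroupLength = options.getOrDefault("implicit.rgl.column", "rgl");
      }

      String lastModifiedTime() { return lastModifiedTime; }
      String rowGroupIndex() { return rowGroupIndex; }
      String rowGroupStart() { return rowGroupStart; }
      String rowGroupLength() { return rowGroupLength; }
    }
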
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHandlerBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHandlerBatch.java
index 7444a04..22e90fa 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHandlerBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHandlerBatch.java
@@ -20,8 +20,8 @@ package org.apache.drill.exec.physical.impl.metadata;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.types.TypeProtos.MinorType;
 import org.apache.drill.common.types.Types;
-import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.exception.OutOfMemoryException;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.metastore.analyze.MetastoreAnalyzeConstants;
 import org.apache.drill.exec.ops.FragmentContext;
 import org.apache.drill.exec.physical.config.MetadataHandlerPOP;
@@ -81,6 +81,7 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
   private final Tables tables;
   private final MetadataType metadataType;
   private final Map<String, MetadataInfo> metadataToHandle;
+  private final ColumnNamesOptions columnNamesOptions;
 
   private boolean firstBatch = true;
 
@@ -89,6 +90,7 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
     super(popConfig, context, incoming);
     this.tables = context.getMetastoreRegistry().get().tables();
     this.metadataType = popConfig.getContext().metadataType();
+    this.columnNamesOptions = new ColumnNamesOptions(context.getOptions());
     this.metadataToHandle = popConfig.getContext().metadataToHandle() != null
         ? popConfig.getContext().metadataToHandle().stream()
             .collect(Collectors.toMap(MetadataInfo::identifier, Function.identity()))
@@ -101,7 +103,7 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
     // 2. For the case when incoming operator returned nothing - no updated underlying metadata was found.
     // 3. Fetches metadata which should be handled but wasn't returned by incoming batch from the Metastore
 
-    IterOutcome outcome = next(incoming);
+    IterOutcome outcome = incoming.getRecordCount() == 0 ? next(incoming) : getLastKnownOutcome();
 
     switch (outcome) {
       case NONE:
@@ -117,8 +119,7 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
             outcome = IterOutcome.OK;
           }
         }
-        doWorkInternal();
-        return outcome;
+        // fall thru
       case OK:
         assert !firstBatch : "First batch should be OK_NEW_SCHEMA";
         doWorkInternal();
@@ -286,14 +287,14 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
     }
 
     if (metadataType == MetadataType.ROW_GROUP) {
-      schemaBuilder.addNullable(context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_INDEX_COLUMN_LABEL), MinorType.VARCHAR);
-      schemaBuilder.addNullable(context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_START_COLUMN_LABEL), MinorType.VARCHAR);
-      schemaBuilder.addNullable(context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_LENGTH_COLUMN_LABEL), MinorType.VARCHAR);
+      schemaBuilder.addNullable(columnNamesOptions.rowGroupIndex(), MinorType.VARCHAR);
+      schemaBuilder.addNullable(columnNamesOptions.rowGroupStart(), MinorType.VARCHAR);
+      schemaBuilder.addNullable(columnNamesOptions.rowGroupLength(), MinorType.VARCHAR);
     }
 
     schemaBuilder
         .addNullable(MetastoreAnalyzeConstants.SCHEMA_FIELD, MinorType.VARCHAR)
-        .addNullable(context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL), MinorType.VARCHAR)
+        .addNullable(columnNamesOptions.lastModifiedTime(), MinorType.VARCHAR)
         .add(MetastoreAnalyzeConstants.METADATA_TYPE, MinorType.VARCHAR);
 
     ResultSetLoaderImpl.ResultSetOptions options = new OptionBuilder()
@@ -307,11 +308,6 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
   private <T extends BaseMetadata & LocationProvider> VectorContainer writeMetadataUsingBatchSchema(List<T> metadataList) {
     Preconditions.checkArgument(!metadataList.isEmpty(), "Metadata list shouldn't be empty.");
 
-    String lastModifiedTimeField = context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL);
-    String rgiField = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_INDEX_COLUMN_LABEL);
-    String rgsField = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_START_COLUMN_LABEL);
-    String rglField = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_LENGTH_COLUMN_LABEL);
-
     ResultSetLoader resultSetLoader = getResultSetLoaderWithBatchSchema();
     resultSetLoader.startBatch();
     RowSetLoader rowWriter = resultSetLoader.writer();
@@ -352,13 +348,13 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
           arguments.add(new Object[]{});
         } else if (fieldName.equals(MetastoreAnalyzeConstants.SCHEMA_FIELD)) {
           arguments.add(metadata.getSchema().jsonString());
-        } else if (fieldName.equals(lastModifiedTimeField)) {
+        } else if (fieldName.equals(columnNamesOptions.lastModifiedTime())) {
           arguments.add(String.valueOf(metadata.getLastModifiedTime()));
-        } else if (fieldName.equals(rgiField)) {
+        } else if (fieldName.equals(columnNamesOptions.rowGroupIndex())) {
           arguments.add(String.valueOf(((RowGroupMetadata) metadata).getRowGroupIndex()));
-        } else if (fieldName.equals(rgsField)) {
+        } else if (fieldName.equals(columnNamesOptions.rowGroupStart())) {
           arguments.add(Long.toString(metadata.getStatistic(() -> ExactStatisticsConstants.START)));
-        } else if (fieldName.equals(rglField)) {
+        } else if (fieldName.equals(columnNamesOptions.rowGroupLength())) {
           arguments.add(Long.toString(metadata.getStatistic(() -> ExactStatisticsConstants.LENGTH)));
         } else if (fieldName.equals(MetastoreAnalyzeConstants.METADATA_TYPE)) {
           arguments.add(metadataType.name());
@@ -374,10 +370,6 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
   }
 
   private ResultSetLoader getResultSetLoaderWithBatchSchema() {
-    String lastModifiedTimeField = context.getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL);
-    String rgiField = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_INDEX_COLUMN_LABEL);
-    String rgsField = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_START_COLUMN_LABEL);
-    String rglField = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_LENGTH_COLUMN_LABEL);
     SchemaBuilder schemaBuilder = new SchemaBuilder();
     // adds fields to the schema preserving their order to avoid issues in outcoming batches
     for (VectorWrapper<?> vectorWrapper : container) {
@@ -385,10 +377,10 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
       String fieldName = field.getName();
       if (fieldName.equals(MetastoreAnalyzeConstants.LOCATION_FIELD)
           || fieldName.equals(MetastoreAnalyzeConstants.SCHEMA_FIELD)
-          || fieldName.equals(lastModifiedTimeField)
-          || fieldName.equals(rgiField)
-          || fieldName.equals(rgsField)
-          || fieldName.equals(rglField)
+          || fieldName.equals(columnNamesOptions.lastModifiedTime())
+          || fieldName.equals(columnNamesOptions.rowGroupIndex())
+          || fieldName.equals(columnNamesOptions.rowGroupStart())
+          || fieldName.equals(columnNamesOptions.rowGroupLength())
           || fieldName.equals(MetastoreAnalyzeConstants.METADATA_TYPE)
           || popConfig.getContext().segmentColumns().contains(fieldName)) {
         schemaBuilder.add(fieldName, field.getType().getMinorType(), field.getDataMode());
@@ -416,9 +408,9 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
     container.clear();
     StreamSupport.stream(populatedContainer.spliterator(), false)
         .map(VectorWrapper::getField)
-        .filter(field -> field.getType().getMinorType() != MinorType.NULL)
         .forEach(container::addOrGet);
     container.buildSchema(BatchSchema.SelectionVectorMode.NONE);
+    container.setEmpty();
   }
 
   protected boolean setupNewSchema() {
@@ -445,18 +437,17 @@ public class MetadataHandlerBatch extends AbstractSingleRecordBatch<MetadataHand
   }
 
   private void updateMetadataToHandle() {
-    RowSetReader reader = DirectRowSet.fromContainer(container).reader();
     // updates metadataToHandle to be able to fetch required data which wasn't returned by incoming batch
     if (metadataToHandle != null && !metadataToHandle.isEmpty()) {
+      RowSetReader reader = DirectRowSet.fromContainer(container).reader();
       switch (metadataType) {
         case ROW_GROUP: {
-          String rgiColumnName = context.getOptions().getString(ExecConstants.IMPLICIT_ROW_GROUP_INDEX_COLUMN_LABEL);
           while (reader.next() && !metadataToHandle.isEmpty()) {
             List<String> partitionValues = popConfig.getContext().segmentColumns().stream()
                 .map(columnName -> reader.column(columnName).scalar().getString())
                 .collect(Collectors.toList());
             Path location = new Path(reader.column(MetastoreAnalyzeConstants.LOCATION_FIELD).scalar().getString());
-            int rgi = Integer.parseInt(reader.column(rgiColumnName).scalar().getString());
+            int rgi = Integer.parseInt(reader.column(columnNamesOptions.rowGroupIndex()).scalar().getString());
             metadataToHandle.remove(MetadataIdentifierUtils.getRowGroupMetadataIdentifier(partitionValues, location, rgi));
           }
           break;
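
The doWork change above reads as follows: the handler only pulls the next upstream batch when the current one has been fully consumed, and otherwise reuses the last known outcome so pending rows are not skipped. A tiny sketch of that guard under this interpretation, with hypothetical interface and method names:

    class NextBatchGuardSketch {
      interface Upstream {
        int pendingRecordCount();
        String pullNext();          // advances the upstream iterator
        String lastKnownOutcome();  // outcome already reported for the pending batch
      }

      // Only advance upstream when nothing is pending; otherwise keep working on
      // the batch that is already in flight.
      static String nextOutcome(Upstream upstream) {
        return upstream.pendingRecordCount() == 0
            ? upstream.pullNext()
            : upstream.lastKnownOutcome();
      }
    }
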
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHashAggBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHashAggBatch.java
new file mode 100644
index 0000000..7c83b2d
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHashAggBatch.java
@@ -0,0 +1,56 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.physical.impl.metadata;
+
+import org.apache.drill.common.logical.data.NamedExpression;
+import org.apache.drill.exec.exception.ClassTransformationException;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
+import org.apache.drill.exec.ops.FragmentContext;
+import org.apache.drill.exec.physical.config.MetadataHashAggPOP;
+import org.apache.drill.exec.physical.impl.aggregate.HashAggBatch;
+import org.apache.drill.exec.physical.impl.aggregate.HashAggregator;
+import org.apache.drill.exec.record.RecordBatch;
+
+import java.io.IOException;
+import java.util.List;
+
+public class MetadataHashAggBatch extends HashAggBatch {
+  private List<NamedExpression> valueExpressions;
+
+  public MetadataHashAggBatch(MetadataHashAggPOP popConfig, RecordBatch incoming, FragmentContext context) {
+    super(popConfig, incoming, context);
+  }
+
+  @Override
+  protected HashAggregator createAggregatorInternal()
+      throws SchemaChangeException, ClassTransformationException, IOException {
+    MetadataHashAggPOP popConfig = (MetadataHashAggPOP) this.popConfig;
+
+    valueExpressions = new MetadataAggregateHelper(popConfig.getContext(),
+            new ColumnNamesOptions(context.getOptions()), incoming.getSchema(), popConfig.getPhase())
+        .getValueExpressions();
+
+    return super.createAggregatorInternal();
+  }
+
+  @Override
+  protected List<NamedExpression> getValueExpressions() {
+    return valueExpressions;
+  }
+}
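
The new MetadataHashAggBatch plugs the shared MetadataAggregateHelper into the now-extensible HashAggBatch by overriding getValueExpressions(); MetadataStreamAggBatch below does the same for the streaming variant. A schematic of the template-method hook this relies on, with hypothetical names and simplified types:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    abstract class AggBatchSketch {
      // Subclasses may override this hook to inject value expressions computed
      // elsewhere, as the metadata aggregation batches do with their helper.
      protected List<String> getValueExpressions() {
        return Collections.emptyList(); // base behaviour: use the plan's own expressions
      }

      final void setup() {
        List<String> exprs = getValueExpressions();
        // ... the real operator builds the aggregator from these expressions ...
        System.out.println("aggregating " + exprs.size() + " value expressions");
      }
    }

    class MetadataAggBatchSketch extends AggBatchSketch {
      @Override
      protected List<String> getValueExpressions() {
        return Arrays.asList("any_value(`location`)", "max(`lmt`)");
      }
    }
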
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatchCreator.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHashAggBatchCreator.java
similarity index 80%
copy from exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatchCreator.java
copy to exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHashAggBatchCreator.java
index eda40e9..2563b0e 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatchCreator.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataHashAggBatchCreator.java
@@ -19,7 +19,7 @@ package org.apache.drill.exec.physical.impl.metadata;
 
 import org.apache.drill.common.exceptions.ExecutionSetupException;
 import org.apache.drill.exec.ops.ExecutorFragmentContext;
-import org.apache.drill.exec.physical.config.MetadataAggPOP;
+import org.apache.drill.exec.physical.config.MetadataHashAggPOP;
 import org.apache.drill.exec.physical.impl.BatchCreator;
 import org.apache.drill.exec.record.CloseableRecordBatch;
 import org.apache.drill.exec.record.RecordBatch;
@@ -27,12 +27,12 @@ import org.apache.drill.shaded.guava.com.google.common.base.Preconditions;
 
 import java.util.List;
 
-public class MetadataAggBatchCreator implements BatchCreator<MetadataAggPOP> {
+public class MetadataHashAggBatchCreator implements BatchCreator<MetadataHashAggPOP> {
 
   @Override
   public CloseableRecordBatch getBatch(ExecutorFragmentContext context,
-      MetadataAggPOP config, List<RecordBatch> children) throws ExecutionSetupException {
+      MetadataHashAggPOP config, List<RecordBatch> children) throws ExecutionSetupException {
     Preconditions.checkArgument(children.size() == 1);
-    return new MetadataAggBatch(config, children.iterator().next(), context);
+    return new MetadataHashAggBatch(config, children.iterator().next(), context);
   }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataStreamAggBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataStreamAggBatch.java
new file mode 100644
index 0000000..d982179
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataStreamAggBatch.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.physical.impl.metadata;
+
+import org.apache.drill.common.logical.data.NamedExpression;
+import org.apache.drill.exec.exception.ClassTransformationException;
+import org.apache.drill.exec.exception.OutOfMemoryException;
+import org.apache.drill.exec.exception.SchemaChangeException;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
+import org.apache.drill.exec.ops.FragmentContext;
+import org.apache.drill.exec.physical.config.MetadataStreamAggPOP;
+import org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch;
+import org.apache.drill.exec.physical.impl.aggregate.StreamingAggregator;
+import org.apache.drill.exec.record.RecordBatch;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Operator which adds aggregate calls for all incoming columns to calculate required metadata and produces aggregations.
+ * If aggregation is performed on top of another aggregation, required aggregate calls for merging metadata will be added.
+ */
+public class MetadataStreamAggBatch extends StreamingAggBatch {
+
+  private List<NamedExpression> valueExpressions;
+
+  public MetadataStreamAggBatch(MetadataStreamAggPOP popConfig, RecordBatch incoming, FragmentContext context) throws OutOfMemoryException {
+    super(popConfig, incoming, context);
+  }
+
+  @Override
+  protected StreamingAggregator createAggregatorInternal()
+      throws SchemaChangeException, ClassTransformationException, IOException {
+    MetadataStreamAggPOP popConfig = (MetadataStreamAggPOP) this.popConfig;
+
+    valueExpressions = new MetadataAggregateHelper(popConfig.getContext(),
+            new ColumnNamesOptions(context.getOptions()), incoming.getSchema(), popConfig.getPhase())
+        .getValueExpressions();
+
+    return super.createAggregatorInternal();
+  }
+
+  @Override
+  protected List<NamedExpression> getValueExpressions() {
+    return valueExpressions;
+  }
+}
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatchCreator.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataStreamAggBatchCreator.java
similarity index 80%
rename from exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatchCreator.java
rename to exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataStreamAggBatchCreator.java
index eda40e9..15c494b 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataAggBatchCreator.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/metadata/MetadataStreamAggBatchCreator.java
@@ -19,7 +19,7 @@ package org.apache.drill.exec.physical.impl.metadata;
 
 import org.apache.drill.common.exceptions.ExecutionSetupException;
 import org.apache.drill.exec.ops.ExecutorFragmentContext;
-import org.apache.drill.exec.physical.config.MetadataAggPOP;
+import org.apache.drill.exec.physical.config.MetadataStreamAggPOP;
 import org.apache.drill.exec.physical.impl.BatchCreator;
 import org.apache.drill.exec.record.CloseableRecordBatch;
 import org.apache.drill.exec.record.RecordBatch;
@@ -27,12 +27,12 @@ import org.apache.drill.shaded.guava.com.google.common.base.Preconditions;
 
 import java.util.List;
 
-public class MetadataAggBatchCreator implements BatchCreator<MetadataAggPOP> {
+public class MetadataStreamAggBatchCreator implements BatchCreator<MetadataStreamAggPOP> {
 
   @Override
   public CloseableRecordBatch getBatch(ExecutorFragmentContext context,
-      MetadataAggPOP config, List<RecordBatch> children) throws ExecutionSetupException {
+      MetadataStreamAggPOP config, List<RecordBatch> children) throws ExecutionSetupException {
     Preconditions.checkArgument(children.size() == 1);
-    return new MetadataAggBatch(config, children.iterator().next(), context);
+    return new MetadataStreamAggBatch(config, children.iterator().next(), context);
   }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/project/ProjectRecordBatch.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/project/ProjectRecordBatch.java
index 674b3ab..185b8f0 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/project/ProjectRecordBatch.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/project/ProjectRecordBatch.java
@@ -516,7 +516,7 @@ public class ProjectRecordBatch extends AbstractSingleRecordBatch<Project> {
         }
 
         // The reference name will be passed to ComplexWriter, used as the name of the output vector from the writer.
-        ((DrillFuncHolderExpr) expr).getFieldReference(namedExpression.getRef());
+        ((DrillFuncHolderExpr) expr).setFieldReference(namedExpression.getRef());
         cg.addExpr(expr, ClassGenerator.BlkCreateMode.TRUE_IF_BOUND);
         if (complexFieldReferencesList == null) {
           complexFieldReferencesList = Lists.newArrayList();
@@ -578,7 +578,7 @@ public class ProjectRecordBatch extends AbstractSingleRecordBatch<Project> {
   }
 
   private boolean isImplicitFileColumn(ValueVector vvIn) {
-    return columnExplorer.isImplicitFileColumn(vvIn.getField().getName());
+    return columnExplorer.isImplicitOrInternalFileColumn(vvIn.getField().getName());
   }
 
   private List<NamedExpression> getExpressionList() {
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/BatchValidator.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/BatchValidator.java
index 39189b1..8793a65 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/BatchValidator.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/validate/BatchValidator.java
@@ -36,6 +36,10 @@ import org.apache.drill.exec.physical.impl.limit.LimitRecordBatch;
 import org.apache.drill.exec.physical.impl.limit.PartitionLimitRecordBatch;
 import org.apache.drill.exec.physical.impl.mergereceiver.MergingRecordBatch;
 import org.apache.drill.exec.physical.impl.orderedpartitioner.OrderedPartitionRecordBatch;
+import org.apache.drill.exec.physical.impl.metadata.MetadataHashAggBatch;
+import org.apache.drill.exec.physical.impl.metadata.MetadataStreamAggBatch;
+import org.apache.drill.exec.physical.impl.metadata.MetadataControllerBatch;
+import org.apache.drill.exec.physical.impl.metadata.MetadataHandlerBatch;
 import org.apache.drill.exec.physical.impl.project.ProjectRecordBatch;
 import org.apache.drill.exec.physical.impl.protocol.OperatorRecordBatch;
 import org.apache.drill.exec.physical.impl.rangepartitioner.RangePartitionRecordBatch;
@@ -247,6 +251,10 @@ public class BatchValidator {
     rules.put(HashJoinBatch.class, CheckMode.VECTORS);
     rules.put(ExternalSortBatch.class, CheckMode.VECTORS);
     rules.put(WriterRecordBatch.class, CheckMode.VECTORS);
+    rules.put(MetadataStreamAggBatch.class, CheckMode.VECTORS);
+    rules.put(MetadataHashAggBatch.class, CheckMode.VECTORS);
+    rules.put(MetadataHandlerBatch.class, CheckMode.VECTORS);
+    rules.put(MetadataControllerBatch.class, CheckMode.VECTORS);
     return rules;
   }
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/resultSet/model/single/BaseReaderBuilder.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/resultSet/model/single/BaseReaderBuilder.java
index 75dfeac..3bbe6ab 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/resultSet/model/single/BaseReaderBuilder.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/resultSet/model/single/BaseReaderBuilder.java
@@ -46,7 +46,7 @@ import org.apache.drill.exec.vector.complex.UnionVector;
  * <p>
  * Derived classes handle the details of the various kinds of readers.
  * Today there is a single subclass that builds (test-time)
- * {@link RowSet} objects. The idea, however, is that we may eventually
+ * {@link org.apache.drill.exec.physical.rowSet.RowSet} objects. The idea, however, is that we may eventually
  * want to create a "result set reader" for use in internal operators,
  * in parallel to the "result set loader". The result set reader would
  * handle a stream of incoming batches. The extant RowSet class handles
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/rowSet/RowSetFormatter.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/rowSet/RowSetFormatter.java
index 4b734e6..9bf9f44 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/rowSet/RowSetFormatter.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/rowSet/RowSetFormatter.java
@@ -25,6 +25,7 @@ import org.apache.commons.io.output.StringBuilderWriter;
 import org.apache.drill.common.exceptions.DrillRuntimeException;
 import org.apache.drill.exec.physical.impl.protocol.BatchAccessor;
 import org.apache.drill.exec.record.BatchSchema.SelectionVectorMode;
+import org.apache.drill.exec.record.RecordBatch;
 import org.apache.drill.exec.record.VectorContainer;
 import org.apache.drill.exec.record.metadata.TupleMetadata;
 
@@ -60,6 +61,10 @@ public class RowSetFormatter {
     RowSets.wrap(batch).print();
   }
 
+  public static void print(RecordBatch batch) {
+    RowSets.wrap(batch).print();
+  }
+
   public static String toString(RowSet rowSet) {
     StringBuilderWriter out = new StringBuilderWriter();
     new RowSetFormatter(rowSet, out).write();
@@ -116,18 +121,18 @@ public class RowSetFormatter {
   }
 
   private void writeHeader(Writer writer, RowSetReader reader, SelectionVectorMode selectionMode) throws IOException {
-    writer.write(Integer.toString(reader.logicalIndex()));
+    writer.write(String.valueOf(reader.logicalIndex()));
     switch (selectionMode) {
       case FOUR_BYTE:
         writer.write(" (");
-        writer.write(reader.hyperVectorIndex());
+        writer.write(String.valueOf(reader.hyperVectorIndex()));
         writer.write(", ");
-        writer.write(Integer.toString(reader.offset()));
+        writer.write(String.valueOf(reader.offset()));
         writer.write(")");
         break;
       case TWO_BYTE:
         writer.write(" (");
-        writer.write(Integer.toString(reader.offset()));
+        writer.write(String.valueOf(reader.offset()));
         writer.write(")");
         break;
       default:
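A side note on the String.valueOf() changes in writeHeader() above: java.io.Writer.write(int) treats its argument as a character code rather than a number, which is the behavior the patch corrects. A small standalone demonstration:

import java.io.StringWriter;

public class WriterIntDemo {
  public static void main(String[] args) {
    StringWriter writer = new StringWriter();
    writer.write(65);                  // Writer.write(int) treats 65 as a char code and writes 'A'
    writer.write(String.valueOf(65));  // writes the text "65"
    System.out.println(writer);        // prints: A65
  }
}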
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/PlannerPhase.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/PlannerPhase.java
index a3e907b..203150f 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/PlannerPhase.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/PlannerPhase.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.exec.planner;
 
+import org.apache.drill.exec.planner.logical.ConvertMetadataAggregateToDirectScanRule;
 import org.apache.drill.exec.planner.physical.MetadataAggPrule;
 import org.apache.drill.exec.planner.physical.MetadataControllerPrule;
 import org.apache.drill.exec.planner.physical.MetadataHandlerPrule;
@@ -525,6 +526,7 @@ public enum PlannerPhase {
     ruleList.add(MetadataControllerPrule.INSTANCE);
     ruleList.add(MetadataHandlerPrule.INSTANCE);
     ruleList.add(MetadataAggPrule.INSTANCE);
+    ruleList.add(ConvertMetadataAggregateToDirectScanRule.INSTANCE);
 
     ruleList.add(UnnestPrule.INSTANCE);
     ruleList.add(LateralJoinPrule.INSTANCE);
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/ConvertCountToDirectScanRule.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/ConvertCountToDirectScanRule.java
index fb29cda..64a0b68 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/ConvertCountToDirectScanRule.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/ConvertCountToDirectScanRule.java
@@ -102,7 +102,7 @@ public class ConvertCountToDirectScanRule extends RelOptRule {
   private static final Logger logger = LoggerFactory.getLogger(ConvertCountToDirectScanRule.class);
 
   private ConvertCountToDirectScanRule(RelOptRuleOperand rule, String id) {
-    super(rule, "ConvertCountToDirectScanRule:" + id);
+    super(rule, DrillRelFactories.LOGICAL_BUILDER, "ConvertCountToDirectScanRule:" + id);
   }
 
   @Override
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/ConvertMetadataAggregateToDirectScanRule.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/ConvertMetadataAggregateToDirectScanRule.java
new file mode 100644
index 0000000..43f6383
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/ConvertMetadataAggregateToDirectScanRule.java
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.logical;
+
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelNode;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.expr.IsPredicate;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
+import org.apache.drill.exec.metastore.analyze.AnalyzeColumnUtils;
+import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
+import org.apache.drill.exec.metastore.analyze.MetastoreAnalyzeConstants;
+import org.apache.drill.exec.physical.base.GroupScan;
+import org.apache.drill.exec.physical.base.ScanStats;
+import org.apache.drill.exec.planner.physical.PlannerSettings;
+import org.apache.drill.exec.planner.physical.PrelUtil;
+import org.apache.drill.exec.store.ColumnExplorer;
+import org.apache.drill.exec.store.ColumnExplorer.ImplicitFileColumns;
+import org.apache.drill.exec.store.dfs.DrillFileSystem;
+import org.apache.drill.exec.store.dfs.FormatSelection;
+import org.apache.drill.exec.store.direct.DirectGroupScan;
+import org.apache.drill.exec.store.parquet.ParquetGroupScan;
+import org.apache.drill.exec.store.pojo.DynamicPojoRecordReader;
+import org.apache.drill.exec.util.ImpersonationUtil;
+import org.apache.drill.exec.util.Utilities;
+import org.apache.drill.metastore.metadata.MetadataType;
+import org.apache.drill.metastore.metadata.RowGroupMetadata;
+import org.apache.drill.metastore.statistics.ColumnStatistics;
+import org.apache.drill.metastore.statistics.ColumnStatisticsKind;
+import org.apache.drill.metastore.statistics.ExactStatisticsConstants;
+import org.apache.drill.metastore.statistics.StatisticsKind;
+import org.apache.drill.metastore.statistics.TableStatisticsKind;
+import org.apache.drill.shaded.guava.com.google.common.collect.HashBasedTable;
+import org.apache.drill.shaded.guava.com.google.common.collect.Multimap;
+import org.apache.drill.shaded.guava.com.google.common.collect.Table;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.function.Function;
+import java.util.function.IntFunction;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/**
+ * Rule which converts
+ *
+ * <pre>
+ *   MetadataAggRel(metadataLevel=ROW_GROUP)
+ *   \
+ *   DrillScanRel
+ * </pre>
+ * <p/>
+ * plan into
+ * <pre>
+ *   DrillDirectScanRel
+ * </pre>
+ * where {@link DrillDirectScanRel} is populated with row group metadata.
+ * <p/>
+ * For the case when the aggregate level is not ROW_GROUP, the resulting plan will be the following:
+ *
+ * <pre>
+ *   MetadataAggRel(metadataLevel=FILE (or another non-ROW_GROUP value), createNewAggregations=false)
+ *   \
+ *   DrillDirectScanRel
+ * </pre>
+ */
+public class ConvertMetadataAggregateToDirectScanRule extends RelOptRule {
+  public static final ConvertMetadataAggregateToDirectScanRule INSTANCE =
+      new ConvertMetadataAggregateToDirectScanRule();
+
+  private static final Logger logger = LoggerFactory.getLogger(ConvertMetadataAggregateToDirectScanRule.class);
+
+  public ConvertMetadataAggregateToDirectScanRule() {
+    super(
+        RelOptHelper.some(MetadataAggRel.class, RelOptHelper.any(DrillScanRel.class)),
+        DrillRelFactories.LOGICAL_BUILDER, "ConvertMetadataAggregateToDirectScanRule");
+  }
+
+  @Override
+  public void onMatch(RelOptRuleCall call) {
+    MetadataAggRel agg = call.rel(0);
+    DrillScanRel scan = call.rel(1);
+
+    GroupScan oldGrpScan = scan.getGroupScan();
+    PlannerSettings settings = PrelUtil.getPlannerSettings(call.getPlanner());
+
+    // Apply the rule only for a Parquet group scan and only when the required column metadata is present
+    if (!(oldGrpScan instanceof ParquetGroupScan)
+        || (oldGrpScan.getTableMetadata().getInterestingColumns() != null
+          && !oldGrpScan.getTableMetadata().getInterestingColumns().containsAll(agg.getContext().interestingColumns()))) {
+      return;
+    }
+
+    try {
+      DirectGroupScan directScan = buildDirectScan(agg.getContext().interestingColumns(), scan, settings);
+      if (directScan == null) {
+        logger.warn("Unable to use parquet metadata for ANALYZE since some required metadata is absent within parquet metadata");
+        return;
+      }
+
+      RelNode converted = new DrillDirectScanRel(scan.getCluster(), scan.getTraitSet().plus(DrillRel.DRILL_LOGICAL),
+          directScan, scan.getRowType());
+      if (agg.getContext().metadataLevel() != MetadataType.ROW_GROUP) {
+        MetadataAggregateContext updatedContext = agg.getContext().toBuilder()
+            .createNewAggregations(false)
+            .build();
+        converted = new MetadataAggRel(agg.getCluster(), agg.getTraitSet(), converted, updatedContext);
+      }
+
+      call.transformTo(converted);
+    } catch (Exception e) {
+      logger.warn("Unable to use parquet metadata for ANALYZE: {}", e.getMessage(), e);
+    }
+  }
+
+  private DirectGroupScan buildDirectScan(List<SchemaPath> interestingColumns, DrillScanRel scan, PlannerSettings settings) throws IOException {
+    DrillTable drillTable = Utilities.getDrillTable(scan.getTable());
+
+    ColumnNamesOptions columnNamesOptions = new ColumnNamesOptions(settings.getOptions());
+
+    // populates schema to be used when adding record values
+    FormatSelection selection = (FormatSelection) drillTable.getSelection();
+
+    // adds partition columns to the schema
+    Map<String, Class<?>> schema = ColumnExplorer.getPartitionColumnNames(selection.getSelection(), columnNamesOptions).stream()
+        .collect(Collectors.toMap(
+            Function.identity(),
+            s -> String.class,
+            (o, n) -> n));
+
+    // adds internal implicit columns to the schema
+
+    schema.put(MetastoreAnalyzeConstants.SCHEMA_FIELD, String.class);
+    schema.put(MetastoreAnalyzeConstants.LOCATION_FIELD, String.class);
+    schema.put(columnNamesOptions.rowGroupIndex(), String.class);
+    schema.put(columnNamesOptions.rowGroupStart(), String.class);
+    schema.put(columnNamesOptions.rowGroupLength(), String.class);
+    schema.put(columnNamesOptions.lastModifiedTime(), String.class);
+
+    return populateRecords(interestingColumns, schema, scan, columnNamesOptions);
+  }
+
+  /**
+   * Populates records list with row group metadata.
+   */
+  private DirectGroupScan populateRecords(Collection<SchemaPath> interestingColumns, Map<String, Class<?>> schema,
+      DrillScanRel scan, ColumnNamesOptions columnNamesOptions) throws IOException {
+    ParquetGroupScan parquetGroupScan = (ParquetGroupScan) scan.getGroupScan();
+    DrillTable drillTable = Utilities.getDrillTable(scan.getTable());
+
+    Multimap<Path, RowGroupMetadata> rowGroupsMetadataMap = parquetGroupScan.getMetadataProvider().getRowGroupsMetadataMap();
+
+    Table<String, Integer, Object> recordsTable = HashBasedTable.create();
+    FormatSelection selection = (FormatSelection) drillTable.getSelection();
+    List<String> partitionColumnNames = ColumnExplorer.getPartitionColumnNames(selection.getSelection(), columnNamesOptions);
+
+    FileSystem rawFs = selection.getSelection().getSelectionRoot().getFileSystem(new Configuration());
+    DrillFileSystem fileSystem =
+        ImpersonationUtil.createFileSystem(ImpersonationUtil.getProcessUserName(), rawFs.getConf());
+
+    int rowIndex = 0;
+    for (Map.Entry<Path, RowGroupMetadata> rgEntry : rowGroupsMetadataMap.entries()) {
+      Path path = rgEntry.getKey();
+      RowGroupMetadata rowGroupMetadata = rgEntry.getValue();
+      List<String> partitionValues = ColumnExplorer.listPartitionValues(path, selection.getSelection().getSelectionRoot(), false);
+      for (int i = 0; i < partitionValues.size(); i++) {
+        String partitionColumnName = partitionColumnNames.get(i);
+        recordsTable.put(partitionColumnName, rowIndex, partitionValues.get(i));
+      }
+
+      recordsTable.put(MetastoreAnalyzeConstants.LOCATION_FIELD, rowIndex, ImplicitFileColumns.FQN.getValue(path));
+      recordsTable.put(columnNamesOptions.rowGroupIndex(), rowIndex, String.valueOf(rowGroupMetadata.getRowGroupIndex()));
+
+      if (interestingColumns == null) {
+        interestingColumns = rowGroupMetadata.getColumnsStatistics().keySet();
+      }
+
+      // populates record list with row group column metadata
+      for (SchemaPath schemaPath : interestingColumns) {
+        ColumnStatistics columnStatistics = rowGroupMetadata.getColumnsStatistics().get(schemaPath);
+        if (IsPredicate.isNullOrEmpty(columnStatistics)) {
+          logger.debug("Statistics for {} column wasn't found within {} row group.", schemaPath, path);
+          return null;
+        }
+        for (StatisticsKind statisticsKind : AnalyzeColumnUtils.COLUMN_STATISTICS_FUNCTIONS.keySet()) {
+          Object statsValue;
+          if (statisticsKind.getName().equalsIgnoreCase(TableStatisticsKind.ROW_COUNT.getName())) {
+            statsValue = TableStatisticsKind.ROW_COUNT.getValue(rowGroupMetadata);
+          } else if (statisticsKind.getName().equalsIgnoreCase(ColumnStatisticsKind.NON_NULL_COUNT.getName())) {
+            statsValue = TableStatisticsKind.ROW_COUNT.getValue(rowGroupMetadata) - ColumnStatisticsKind.NULLS_COUNT.getFrom(columnStatistics);
+          } else {
+            statsValue = columnStatistics.get(statisticsKind);
+          }
+          String columnStatisticsFieldName = AnalyzeColumnUtils.getColumnStatisticsFieldName(schemaPath.getRootSegmentPath(), statisticsKind);
+          if (statsValue != null) {
+            schema.putIfAbsent(
+                columnStatisticsFieldName,
+                statsValue.getClass());
+            recordsTable.put(columnStatisticsFieldName, rowIndex, statsValue);
+          }
+        }
+      }
+
+      // populates record list with row group metadata
+      for (StatisticsKind<?> statisticsKind : AnalyzeColumnUtils.META_STATISTICS_FUNCTIONS.keySet()) {
+        String metadataStatisticsFieldName = AnalyzeColumnUtils.getMetadataStatisticsFieldName(statisticsKind);
+        Object statisticsValue = rowGroupMetadata.getStatistic(statisticsKind);
+
+        if (statisticsValue != null) {
+          schema.putIfAbsent(metadataStatisticsFieldName, statisticsValue.getClass());
+          recordsTable.put(metadataStatisticsFieldName, rowIndex, statisticsValue);
+        }
+      }
+
+      // populates record list with internal columns
+      recordsTable.put(MetastoreAnalyzeConstants.SCHEMA_FIELD, rowIndex, rowGroupMetadata.getSchema().jsonString());
+      recordsTable.put(columnNamesOptions.rowGroupStart(), rowIndex, Long.toString(rowGroupMetadata.getStatistic(() -> ExactStatisticsConstants.START)));
+      recordsTable.put(columnNamesOptions.rowGroupLength(), rowIndex, Long.toString(rowGroupMetadata.getStatistic(() -> ExactStatisticsConstants.LENGTH)));
+      recordsTable.put(columnNamesOptions.lastModifiedTime(), rowIndex, String.valueOf(fileSystem.getFileStatus(path).getModificationTime()));
+
+      rowIndex++;
+    }
+
+    // DynamicPojoRecordReader requires a LinkedHashMap whose field order
+    // corresponds to the value positions in the record list.
+    LinkedHashMap<String, Class<?>> orderedSchema = recordsTable.rowKeySet().stream()
+        .collect(Collectors.toMap(
+            Function.identity(),
+            column -> schema.getOrDefault(column, Integer.class),
+            (o, n) -> n,
+            LinkedHashMap::new));
+
+    IntFunction<List<Object>> collectRecord = currentIndex -> orderedSchema.keySet().stream()
+        .map(column -> recordsTable.get(column, currentIndex))
+        .collect(Collectors.toList());
+
+    List<List<Object>> records = IntStream.range(0, rowIndex)
+        .mapToObj(collectRecord)
+        .collect(Collectors.toList());
+
+    DynamicPojoRecordReader<?> reader = new DynamicPojoRecordReader<>(orderedSchema, records);
+
+    ScanStats scanStats = new ScanStats(ScanStats.GroupScanProperty.EXACT_ROW_COUNT, records.size(), 1, schema.size());
+
+    return new DirectGroupScan(reader, scanStats);
+  }
+}
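The populateRecords() method above collects values into a (column, row index) table and then flattens them into row lists whose order follows an insertion-ordered schema map. A simplified, self-contained sketch of that assembly step, using plain java.util maps instead of Drill's and Guava's types (field names and values are made up):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class RecordAssemblySketch {
  public static void main(String[] args) {
    Map<String, Map<Integer, Object>> recordsTable = new LinkedHashMap<>();
    put(recordsTable, "location", 0, "/data/part-0.parquet");
    put(recordsTable, "rowCount", 0, 100L);
    put(recordsTable, "location", 1, "/data/part-1.parquet");
    put(recordsTable, "rowCount", 1, 250L);

    // the schema is insertion-ordered, mirroring the LinkedHashMap requirement noted in the patch
    LinkedHashMap<String, Class<?>> schema = new LinkedHashMap<>();
    schema.put("location", String.class);
    schema.put("rowCount", Long.class);

    int rowCount = 2;
    List<List<Object>> records = IntStream.range(0, rowCount)
        .mapToObj(row -> schema.keySet().stream()
            .map(column -> recordsTable.get(column).get(row))
            .collect(Collectors.toList()))
        .collect(Collectors.toList());

    System.out.println(records); // [[/data/part-0.parquet, 100], [/data/part-1.parquet, 250]]
  }

  private static void put(Map<String, Map<Integer, Object>> table, String column, int row, Object value) {
    table.computeIfAbsent(column, k -> new LinkedHashMap<>()).put(row, value);
  }
}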
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/DrillDistributionTrait.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/DrillDistributionTrait.java
index 5683c77..6673c63 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/DrillDistributionTrait.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/DrillDistributionTrait.java
@@ -21,11 +21,11 @@ import org.apache.calcite.plan.RelOptPlanner;
 import org.apache.calcite.plan.RelTrait;
 import org.apache.calcite.plan.RelTraitDef;
 
-import org.apache.drill.shaded.guava.com.google.common.collect.ImmutableList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Objects;
 
 public class DrillDistributionTrait implements RelTrait {
-  public static enum DistributionType {SINGLETON, HASH_DISTRIBUTED, RANGE_DISTRIBUTED, RANDOM_DISTRIBUTED,
-                                       ROUND_ROBIN_DISTRIBUTED, BROADCAST_DISTRIBUTED, ANY};
 
   public static DrillDistributionTrait SINGLETON = new DrillDistributionTrait(DistributionType.SINGLETON);
   public static DrillDistributionTrait RANDOM_DISTRIBUTED = new DrillDistributionTrait(DistributionType.RANDOM_DISTRIBUTED);
@@ -33,24 +33,24 @@ public class DrillDistributionTrait implements RelTrait {
 
   public static DrillDistributionTrait DEFAULT = ANY;
 
-  private DistributionType type;
-  private final ImmutableList<DistributionField> fields;
+  private final DistributionType type;
+  private final List<DistributionField> fields;
   private PartitionFunction partitionFunction = null;
 
   public DrillDistributionTrait(DistributionType type) {
     assert (type == DistributionType.SINGLETON || type == DistributionType.RANDOM_DISTRIBUTED || type == DistributionType.ANY
             || type == DistributionType.ROUND_ROBIN_DISTRIBUTED || type == DistributionType.BROADCAST_DISTRIBUTED);
     this.type = type;
-    this.fields = ImmutableList.<DistributionField>of();
+    this.fields = Collections.emptyList();
   }
 
-  public DrillDistributionTrait(DistributionType type, ImmutableList<DistributionField> fields) {
+  public DrillDistributionTrait(DistributionType type, List<DistributionField> fields) {
     assert (type == DistributionType.HASH_DISTRIBUTED || type == DistributionType.RANGE_DISTRIBUTED);
     this.type = type;
     this.fields = fields;
   }
 
-  public DrillDistributionTrait(DistributionType type, ImmutableList<DistributionField> fields,
+  public DrillDistributionTrait(DistributionType type, List<DistributionField> fields,
       PartitionFunction partitionFunction) {
     assert (type == DistributionType.HASH_DISTRIBUTED || type == DistributionType.RANGE_DISTRIBUTED);
     this.type = type;
@@ -105,7 +105,7 @@ public class DrillDistributionTrait implements RelTrait {
     return this.type;
   }
 
-  public ImmutableList<DistributionField> getFields() {
+  public List<DistributionField> getFields() {
     return fields;
   }
 
@@ -114,12 +114,7 @@ public class DrillDistributionTrait implements RelTrait {
   }
 
   private boolean arePartitionFunctionsSame(PartitionFunction f1, PartitionFunction f2) {
-    if (f1 != null && f2 != null) {
-      return f1.equals(f2);
-    } else if (f2 == null && f2 == null) {
-      return true;
-    }
-    return false;
+    return Objects.equals(f1, f2);
   }
 
   @Override
@@ -145,6 +140,15 @@ public class DrillDistributionTrait implements RelTrait {
     return fields == null ? this.type.toString() : this.type.toString() + "(" + fields + ")";
   }
 
+  public enum DistributionType {
+    SINGLETON,
+    HASH_DISTRIBUTED,
+    RANGE_DISTRIBUTED,
+    RANDOM_DISTRIBUTED,
+    ROUND_ROBIN_DISTRIBUTED,
+    BROADCAST_DISTRIBUTED,
+    ANY
+  }
 
   public static class DistributionField {
     /**
@@ -180,4 +184,51 @@ public class DrillDistributionTrait implements RelTrait {
     }
   }
 
+  /**
+   * Stores the distribution field index and field name to be used in exchange operators.
+   * The field name is required for the case of dynamic schema discovery,
+   * when the field is not present in the rel data type at planning time.
+   */
+  public static class NamedDistributionField extends DistributionField {
+    /**
+     * Name of the field being distributed.
+     */
+    private final String fieldName;
+
+    public NamedDistributionField(int fieldId, String fieldName) {
+      super(fieldId);
+      this.fieldName = fieldName;
+    }
+
+    public String getFieldName() {
+      return fieldName;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+      if (this == o) {
+        return true;
+      }
+      if (o == null || getClass() != o.getClass()) {
+        return false;
+      }
+      if (!super.equals(o)) {
+        return false;
+      }
+
+      NamedDistributionField that = (NamedDistributionField) o;
+
+      return Objects.equals(fieldName, that.fieldName);
+    }
+
+    @Override
+    public int hashCode() {
+      return Objects.hash(super.hashCode(), fieldName);
+    }
+
+    @Override
+    public String toString() {
+      return String.format("%s(%s)", fieldName, getFieldId());
+    }
+  }
 }
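A quick standalone check of the Objects.equals() refactor in arePartitionFunctionsSame() above; note that the removed branch compared f2 with itself, so the old null handling returned true when only the second function was null, which the single-call form also fixes:

import java.util.Objects;

public class ObjectsEqualsDemo {
  public static void main(String[] args) {
    System.out.println(Objects.equals(null, null));  // true  - both null
    System.out.println(Objects.equals("f1", null));  // false - only one side is null
    System.out.println(Objects.equals("f1", "f1"));  // true  - delegates to equals()
  }
}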
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/HashAggPrule.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/HashAggPrule.java
index 7300a80..fdc03ef 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/HashAggPrule.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/HashAggPrule.java
@@ -18,7 +18,6 @@
 package org.apache.drill.exec.planner.physical;
 
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
-import org.apache.calcite.rel.core.AggregateCall;
 import org.apache.calcite.util.ImmutableBitSet;
 import org.apache.drill.exec.planner.logical.DrillAggregateRel;
 import org.apache.drill.exec.planner.logical.RelOptHelper;
@@ -59,8 +58,7 @@ public class HashAggPrule extends AggPruleBase {
     final DrillAggregateRel aggregate = call.rel(0);
     final RelNode input = call.rel(1);
 
-    if (aggregate.containsDistinctCall() || aggregate.getGroupCount() == 0
-        || requiresStreamingAgg(aggregate)) {
+    if (aggregate.containsDistinctCall() || aggregate.getGroupCount() == 0) {
       // currently, don't use HashAggregate if any of the logical aggrs contains DISTINCT or
       // if there are no grouping keys
       return;
@@ -103,16 +101,6 @@ public class HashAggPrule extends AggPruleBase {
     }
   }
 
-  private boolean requiresStreamingAgg(DrillAggregateRel aggregate) {
-    //If contains ANY_VALUE aggregate, using HashAgg would not work
-    for (AggregateCall agg : aggregate.getAggCallList()) {
-      if (agg.getAggregation().getName().equalsIgnoreCase("any_value")) {
-        return true;
-      }
-    }
-    return false;
-  }
-
   private class TwoPhaseSubset extends SubsetTransformer<DrillAggregateRel, InvalidRelException> {
     final RelTrait distOnAllKeys;
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/HashPrelUtil.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/HashPrelUtil.java
index 494eb82..a7e8cc0 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/HashPrelUtil.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/HashPrelUtil.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.exec.planner.physical;
 
+import org.apache.drill.exec.planner.physical.DrillDistributionTrait.NamedDistributionField;
 import org.apache.drill.shaded.guava.com.google.common.base.Preconditions;
 import org.apache.drill.shaded.guava.com.google.common.collect.ImmutableList;
 import org.apache.calcite.rel.type.RelDataType;
@@ -52,13 +53,8 @@ public class HashPrelUtil {
   /**
    * Implementation of {@link HashExpressionCreatorHelper} for {@link LogicalExpression} type.
    */
-  public static HashExpressionCreatorHelper<LogicalExpression> HASH_HELPER_LOGICALEXPRESSION =
-      new HashExpressionCreatorHelper<LogicalExpression>() {
-        @Override
-        public LogicalExpression createCall(String funcName, List<LogicalExpression> inputFiled) {
-          return new FunctionCall(funcName, inputFiled, ExpressionPosition.UNKNOWN);
-        }
-      };
+  public static HashExpressionCreatorHelper<LogicalExpression> HASH_HELPER_LOGICAL_EXPRESSION =
+      (funcName, inputFiled) -> new FunctionCall(funcName, inputFiled, ExpressionPosition.UNKNOWN);
 
   public static class RexNodeBasedHashExpressionCreatorHelper implements HashExpressionCreatorHelper<RexNode> {
     private final RexBuilder rexBuilder;
@@ -161,7 +157,7 @@ public class HashPrelUtil {
    * @return hash expression
    */
   public static LogicalExpression getHash64Expression(LogicalExpression field, LogicalExpression seed, boolean hashAsDouble) {
-    return createHash64Expression(ImmutableList.of(field), seed, HASH_HELPER_LOGICALEXPRESSION, hashAsDouble);
+    return createHash64Expression(ImmutableList.of(field), seed, HASH_HELPER_LOGICAL_EXPRESSION, hashAsDouble);
   }
 
   /**
@@ -173,33 +169,38 @@ public class HashPrelUtil {
    * @return hash expression
    */
   public static LogicalExpression getHashExpression(LogicalExpression field, LogicalExpression seed, boolean hashAsDouble) {
-    return createHashExpression(ImmutableList.of(field), seed, HASH_HELPER_LOGICALEXPRESSION, hashAsDouble);
+    return createHashExpression(ImmutableList.of(field), seed, HASH_HELPER_LOGICAL_EXPRESSION, hashAsDouble);
   }
 
 
   /**
    * Create a distribution hash expression.
    *
-   * @param fields Distribution fields
+   * @param fields  Distribution fields
    * @param rowType Row type
-   * @return
+   * @return distribution hash expression
    */
   public static LogicalExpression getHashExpression(List<DistributionField> fields, RelDataType rowType) {
     assert fields.size() > 0;
 
-    final List<String> childFields = rowType.getFieldNames();
+    List<String> childFields = rowType.getFieldNames();
 
     // If we already included a field with hash - no need to calculate hash further down
-    if ( childFields.contains(HASH_EXPR_NAME)) {
+    if (childFields.contains(HASH_EXPR_NAME)) {
       return new FieldReference(HASH_EXPR_NAME);
     }
 
-    final List<LogicalExpression> expressions = new ArrayList<LogicalExpression>(childFields.size());
-    for(int i =0; i < fields.size(); i++){
-      expressions.add(new FieldReference(childFields.get(fields.get(i).getFieldId()), ExpressionPosition.UNKNOWN));
+    List<LogicalExpression> expressions = new ArrayList<>();
+    for (DistributionField field : fields) {
+      if (field instanceof NamedDistributionField) {
+        NamedDistributionField namedDistributionField = (NamedDistributionField) field;
+        expressions.add(new FieldReference(namedDistributionField.getFieldName(), ExpressionPosition.UNKNOWN));
+      } else {
+        expressions.add(new FieldReference(childFields.get(field.getFieldId()), ExpressionPosition.UNKNOWN));
+      }
     }
 
-    final LogicalExpression distSeed = ValueExpressions.getInt(DIST_SEED);
-    return createHashBasedPartitionExpression(expressions, distSeed, HASH_HELPER_LOGICALEXPRESSION);
+    LogicalExpression distSeed = ValueExpressions.getInt(DIST_SEED);
+    return createHashBasedPartitionExpression(expressions, distSeed, HASH_HELPER_LOGICAL_EXPRESSION);
   }
 }
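A hypothetical, simplified sketch of the field-resolution logic added to getHashExpression() above (class names are invented): named distribution fields resolve by their stored name, which matters when the schema is discovered dynamically, while plain fields keep resolving by position in the child row type.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FieldResolutionSketch {
  static class DistField { final int id; DistField(int id) { this.id = id; } }

  static class NamedDistField extends DistField {
    final String name;
    NamedDistField(int id, String name) { super(id); this.name = name; }
  }

  public static void main(String[] args) {
    List<String> childFields = Arrays.asList("dir0", "col_a", "col_b");
    List<DistField> fields = Arrays.asList(new DistField(2), new NamedDistField(1, "col_a"));

    List<String> references = new ArrayList<>();
    for (DistField field : fields) {
      if (field instanceof NamedDistField) {
        references.add(((NamedDistField) field).name);   // resolve by name: schema may be discovered dynamically
      } else {
        references.add(childFields.get(field.id));       // resolve by position in the child row type
      }
    }
    System.out.println(references); // [col_b, col_a]
  }
}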
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrule.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrule.java
index 2ce6e46..9a9e11c 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrule.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrule.java
@@ -20,15 +20,26 @@ package org.apache.drill.exec.planner.physical;
 import org.apache.calcite.plan.RelOptRuleCall;
 import org.apache.calcite.plan.RelTraitSet;
 import org.apache.calcite.rel.RelCollation;
+import org.apache.calcite.rel.RelCollationImpl;
 import org.apache.calcite.rel.RelCollations;
 import org.apache.calcite.rel.RelFieldCollation;
 import org.apache.calcite.rel.RelNode;
+import org.apache.drill.common.expression.FieldReference;
+import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.common.logical.data.NamedExpression;
 import org.apache.drill.exec.planner.logical.DrillRel;
 import org.apache.drill.exec.planner.logical.MetadataAggRel;
 import org.apache.drill.exec.planner.logical.RelOptHelper;
+import org.apache.drill.exec.planner.physical.AggPrelBase.OperatorPhase;
+import org.apache.drill.exec.planner.physical.DrillDistributionTrait.NamedDistributionField;
+import org.apache.drill.exec.store.parquet.FilterEvaluatorUtils.FieldReferenceFinder;
+import org.apache.drill.shaded.guava.com.google.common.collect.ImmutableList;
 
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
 import java.util.stream.Collectors;
-import java.util.stream.IntStream;
 
 public class MetadataAggPrule extends Prule {
   public static final MetadataAggPrule INSTANCE = new MetadataAggPrule();
@@ -40,21 +51,178 @@ public class MetadataAggPrule extends Prule {
 
   @Override
   public void onMatch(RelOptRuleCall call) {
-    MetadataAggRel relNode = call.rel(0);
-    RelNode input = relNode.getInput();
-
-    int groupByExprsSize = relNode.getContext().groupByExpressions().size();
-
-    // group by expressions will be returned first
-    RelCollation collation = RelCollations.of(IntStream.range(1, groupByExprsSize)
-        .mapToObj(RelFieldCollation::new)
-        .collect(Collectors.toList()));
-
-    // TODO: update DrillDistributionTrait when implemented parallelization for metadata collecting (see DRILL-7433)
-    RelTraitSet traits = call.getPlanner().emptyTraitSet().plus(Prel.DRILL_PHYSICAL).plus(DrillDistributionTrait.SINGLETON);
-    traits = groupByExprsSize > 0 ? traits.plus(collation) : traits;
-    RelNode convertedInput = convert(input, traits);
-    call.transformTo(
-        new MetadataAggPrel(relNode.getCluster(), traits, convertedInput, relNode.getContext()));
+    MetadataAggRel aggregate = call.rel(0);
+    RelNode input = aggregate.getInput();
+
+    int groupByExprsSize = aggregate.getContext().groupByExpressions().size();
+
+    List<RelFieldCollation> collations = new ArrayList<>();
+    List<String> names = new ArrayList<>();
+    for (int i = 0; i < groupByExprsSize; i++) {
+      collations.add(new RelFieldCollation(i + 1));
+      SchemaPath fieldPath = getArgumentReference(aggregate.getContext().groupByExpressions().get(i));
+      names.add(fieldPath.getRootSegmentPath());
+    }
+
+    RelCollation collation = new NamedRelCollation(collations, names);
+
+    RelTraitSet traits;
+
+    if (aggregate.getContext().groupByExpressions().isEmpty()) {
+      DrillDistributionTrait singleDist = DrillDistributionTrait.SINGLETON;
+      RelTraitSet singleDistTrait = call.getPlanner().emptyTraitSet().plus(Prel.DRILL_PHYSICAL).plus(singleDist);
+
+      createTransformRequest(call, aggregate, input, singleDistTrait);
+    } else {
+      // hash distribute on all grouping keys
+      DrillDistributionTrait distOnAllKeys =
+          new DrillDistributionTrait(DrillDistributionTrait.DistributionType.HASH_DISTRIBUTED,
+              ImmutableList.copyOf(getDistributionFields(aggregate.getContext().groupByExpressions())));
+
+      PlannerSettings settings = PrelUtil.getPlannerSettings(call.getPlanner());
+      boolean smallInput =
+          input.estimateRowCount(input.getCluster().getMetadataQuery()) < settings.getSliceTarget();
+
+      // force 2-phase aggregation for the bottom aggregate call
+      // to produce the sort locally before the aggregation for large inputs
+      if (aggregate.getContext().createNewAggregations() && !smallInput) {
+        traits = call.getPlanner().emptyTraitSet().plus(Prel.DRILL_PHYSICAL);
+        RelNode convertedInput = convert(input, traits);
+
+        new TwoPhaseMetadataAggSubsetTransformer(call, collation, distOnAllKeys)
+            .go(aggregate, convertedInput);
+      } else {
+        // TODO: DRILL-7433 - replace DrillDistributionTrait.SINGLETON with distOnAllKeys when parallelization for MetadataHandler is implemented
+        traits = call.getPlanner().emptyTraitSet().plus(Prel.DRILL_PHYSICAL).plus(collation).plus(DrillDistributionTrait.SINGLETON);
+        createTransformRequest(call, aggregate, input, traits);
+      }
+    }
+  }
+
+  private void createTransformRequest(RelOptRuleCall call, MetadataAggRel aggregate,
+      RelNode input, RelTraitSet traits) {
+
+    RelNode convertedInput = convert(input, PrelUtil.fixTraits(call, traits));
+
+    MetadataStreamAggPrel newAgg = new MetadataStreamAggPrel(
+        aggregate.getCluster(),
+        traits,
+        convertedInput,
+        aggregate.getContext(),
+        OperatorPhase.PHASE_1of1);
+
+    call.transformTo(newAgg);
+  }
+
+  /**
+   * Returns a list of named distribution fields which correspond to the arguments of the specified expressions.
+   *
+   * @param namedExpressions expressions list
+   * @return list of {@link NamedDistributionField} instances
+   */
+  private static List<NamedDistributionField> getDistributionFields(List<NamedExpression> namedExpressions) {
+    List<NamedDistributionField> distributionFields = new ArrayList<>();
+    int groupByExprsSize = namedExpressions.size();
+
+    for (int index = 0; index < groupByExprsSize; index++) {
+      SchemaPath fieldPath = getArgumentReference(namedExpressions.get(index));
+      NamedDistributionField field =
+          new NamedDistributionField(index + 1, fieldPath.getRootSegmentPath());
+      distributionFields.add(field);
+    }
+
+    return distributionFields;
+  }
+
+  /**
+   * Returns a {@link FieldReference} instance which corresponds to the argument of the specified {@code namedExpression}.
+   *
+   * @param namedExpression expression
+   * @return {@link FieldReference} instance
+   */
+  private static FieldReference getArgumentReference(NamedExpression namedExpression) {
+    Set<SchemaPath> arguments = namedExpression.getExpr().accept(FieldReferenceFinder.INSTANCE, null);
+    assert arguments.size() == 1 : "Group by expression contains more than one argument";
+    return new FieldReference(arguments.iterator().next());
+  }
+
+  /**
+   * Implementation of {@link RelCollationImpl} which also carries field names.
+   * Stores the {@link RelFieldCollation} list and the corresponding field names to be used in sort operators.
+   * The field names are required for the case of dynamic schema discovery,
+   * when a field is not present in the rel data type at planning time.
+   */
+  public static class NamedRelCollation extends RelCollationImpl {
+    private final List<String> names;
+
+    protected NamedRelCollation(List<RelFieldCollation> fieldCollations, List<String> names) {
+      super(com.google.common.collect.ImmutableList.copyOf(fieldCollations));
+      this.names = Collections.unmodifiableList(names);
+    }
+
+    public String getName(int collationIndex) {
+      return names.get(collationIndex - 1);
+    }
+  }
+
+  /**
+   * {@link SubsetTransformer} for creating two-phase metadata aggregation.
+   */
+  private static class TwoPhaseMetadataAggSubsetTransformer
+      extends SubsetTransformer<MetadataAggRel, RuntimeException> {
+
+    private final RelCollation collation;
+    private final DrillDistributionTrait distributionTrait;
+
+    public TwoPhaseMetadataAggSubsetTransformer(RelOptRuleCall call,
+        RelCollation collation, DrillDistributionTrait distributionTrait) {
+      super(call);
+      this.collation = collation;
+      this.distributionTrait = distributionTrait;
+    }
+
+    @Override
+    public RelNode convertChild(MetadataAggRel aggregate, RelNode child) {
+      DrillDistributionTrait toDist = child.getTraitSet().getTrait(DrillDistributionTraitDef.INSTANCE);
+      RelTraitSet traits = newTraitSet(Prel.DRILL_PHYSICAL, RelCollations.EMPTY, toDist);
+      RelNode newInput = convert(child, traits);
+
+      // maps group by expressions to themselves to be able to produce the second aggregation
+      List<NamedExpression> identityExpressions = aggregate.getContext().groupByExpressions().stream()
+          .map(namedExpression -> new NamedExpression(namedExpression.getExpr(), getArgumentReference(namedExpression)))
+          .collect(Collectors.toList());
+
+      // use hash aggregation for the first stage to avoid sorting raw data
+      MetadataHashAggPrel phase1Agg = new MetadataHashAggPrel(
+          aggregate.getCluster(),
+          traits,
+          newInput,
+          aggregate.getContext().toBuilder().groupByExpressions(identityExpressions).build(),
+          OperatorPhase.PHASE_1of2);
+
+      traits = newTraitSet(Prel.DRILL_PHYSICAL, collation, toDist).plus(distributionTrait);
+      SortPrel sort = new SortPrel(
+          aggregate.getCluster(),
+          traits,
+          phase1Agg,
+          (RelCollation) traits.getTrait(collation.getTraitDef()));
+
+      int numEndPoints = PrelUtil.getSettings(phase1Agg.getCluster()).numEndPoints();
+
+      HashToMergeExchangePrel exch =
+          new HashToMergeExchangePrel(phase1Agg.getCluster(),
+              traits,
+              sort,
+              ImmutableList.copyOf(getDistributionFields(aggregate.getContext().groupByExpressions())),
+              collation,
+              numEndPoints);
+
+      return new MetadataStreamAggPrel(
+          aggregate.getCluster(),
+          newTraitSet(Prel.DRILL_PHYSICAL, collation, DrillDistributionTrait.SINGLETON),
+          exch,
+          aggregate.getContext(),
+          OperatorPhase.PHASE_2of2);
+    }
   }
 }
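For large inputs the rule above now produces a two-phase plan: a hash-based MetadataHashAggPrel (PHASE_1of2) aggregates each fragment locally, followed by a sort, a hash-to-merge exchange, and a final MetadataStreamAggPrel (PHASE_2of2). A toy, self-contained illustration (not Drill code) of why such a split works for mergeable aggregates, using a plain row-count sum rather than the merge functions wired in by MetadataAggregateHelper:

import java.util.AbstractMap;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TwoPhaseAggSketch {
  public static void main(String[] args) {
    // rows as (groupKey, rowCount) pairs, spread over two "fragments"
    List<Map.Entry<String, Long>> fragment1 = Arrays.asList(entry("dir0=2019", 10L), entry("dir0=2020", 5L));
    List<Map.Entry<String, Long>> fragment2 = Arrays.asList(entry("dir0=2019", 7L));

    // phase 1: local (hash) aggregation inside each fragment
    Map<String, Long> partial1 = aggregate(fragment1);
    Map<String, Long> partial2 = aggregate(fragment2);

    // phase 2: merge the partial results (in Drill, after the hash-to-merge exchange and sort)
    Map<String, Long> merged = aggregate(Stream.concat(
        partial1.entrySet().stream(), partial2.entrySet().stream()).collect(Collectors.toList()));

    System.out.println(merged); // {dir0=2019=17, dir0=2020=5}
  }

  private static Map<String, Long> aggregate(List<Map.Entry<String, Long>> rows) {
    return rows.stream().collect(Collectors.groupingBy(Map.Entry::getKey,
        TreeMap::new, Collectors.summingLong(Map.Entry::getValue)));
  }

  private static Map.Entry<String, Long> entry(String key, Long value) {
    return new AbstractMap.SimpleEntry<>(key, value);
  }
}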
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataHandlerPrule.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataHandlerPrule.java
index 8424469..bd8beab 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataHandlerPrule.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataHandlerPrule.java
@@ -36,7 +36,7 @@ public class MetadataHandlerPrule extends Prule {
   public void onMatch(RelOptRuleCall call) {
     MetadataHandlerRel relNode = call.rel(0);
     RelNode input = relNode.getInput();
-    RelTraitSet traits = input.getTraitSet().plus(Prel.DRILL_PHYSICAL).plus(DrillDistributionTrait.DEFAULT);
+    RelTraitSet traits = input.getTraitSet().plus(Prel.DRILL_PHYSICAL).plus(DrillDistributionTrait.SINGLETON);
     RelNode convertedInput = convert(input, traits);
     call.transformTo(new MetadataHandlerPrel(relNode.getCluster(), traits,
         convertedInput,
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrel.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataHashAggPrel.java
similarity index 78%
copy from exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrel.java
copy to exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataHashAggPrel.java
index a50f1a8..8f454f2 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrel.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataHashAggPrel.java
@@ -21,11 +21,11 @@ import org.apache.calcite.plan.RelOptCluster;
 import org.apache.calcite.plan.RelTraitSet;
 import org.apache.calcite.rel.RelNode;
 import org.apache.calcite.rel.SingleRel;
+import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
 import org.apache.drill.exec.physical.base.PhysicalOperator;
-import org.apache.drill.exec.physical.config.MetadataAggPOP;
+import org.apache.drill.exec.physical.config.MetadataHashAggPOP;
 import org.apache.drill.exec.planner.common.DrillRelNode;
 import org.apache.drill.exec.planner.physical.visitor.PrelVisitor;
-import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
 import org.apache.drill.exec.record.BatchSchema;
 import org.apache.drill.shaded.guava.com.google.common.base.Preconditions;
 
@@ -33,19 +33,21 @@ import java.io.IOException;
 import java.util.Iterator;
 import java.util.List;
 
-public class MetadataAggPrel extends SingleRel implements DrillRelNode, Prel {
+public class MetadataHashAggPrel extends SingleRel implements DrillRelNode, Prel {
   private final MetadataAggregateContext context;
+  private final AggPrelBase.OperatorPhase phase;
 
-  public MetadataAggPrel(RelOptCluster cluster, RelTraitSet traits, RelNode input,
-      MetadataAggregateContext context) {
+  public MetadataHashAggPrel(RelOptCluster cluster, RelTraitSet traits, RelNode input,
+      MetadataAggregateContext context, AggPrelBase.OperatorPhase phase) {
     super(cluster, traits, input);
     this.context = context;
+    this.phase = phase;
   }
 
   @Override
   public PhysicalOperator getPhysicalOperator(PhysicalPlanCreator creator) throws IOException {
     Prel child = (Prel) this.getInput();
-    MetadataAggPOP physicalOperator = new MetadataAggPOP(child.getPhysicalOperator(creator), context);
+    MetadataHashAggPOP physicalOperator = new MetadataHashAggPOP(child.getPhysicalOperator(creator), context, phase);
     return creator.addMetadata(this, physicalOperator);
   }
 
@@ -66,7 +68,7 @@ public class MetadataAggPrel extends SingleRel implements DrillRelNode, Prel {
 
   @Override
   public boolean needsFinalColumnReordering() {
-    return true;
+    return false;
   }
 
   @Override
@@ -77,6 +79,10 @@ public class MetadataAggPrel extends SingleRel implements DrillRelNode, Prel {
   @Override
   public RelNode copy(RelTraitSet traitSet, List<RelNode> inputs) {
     Preconditions.checkState(inputs.size() == 1);
-    return new MetadataAggPrel(getCluster(), traitSet, inputs.iterator().next(), context);
+    return new MetadataHashAggPrel(getCluster(), traitSet, inputs.iterator().next(), context, phase);
+  }
+
+  public AggPrelBase.OperatorPhase getPhase() {
+    return phase;
   }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrel.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataStreamAggPrel.java
similarity index 77%
rename from exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrel.java
rename to exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataStreamAggPrel.java
index a50f1a8..5da8917 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataAggPrel.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/MetadataStreamAggPrel.java
@@ -22,8 +22,9 @@ import org.apache.calcite.plan.RelTraitSet;
 import org.apache.calcite.rel.RelNode;
 import org.apache.calcite.rel.SingleRel;
 import org.apache.drill.exec.physical.base.PhysicalOperator;
-import org.apache.drill.exec.physical.config.MetadataAggPOP;
+import org.apache.drill.exec.physical.config.MetadataStreamAggPOP;
 import org.apache.drill.exec.planner.common.DrillRelNode;
+import org.apache.drill.exec.planner.physical.AggPrelBase.OperatorPhase;
 import org.apache.drill.exec.planner.physical.visitor.PrelVisitor;
 import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
 import org.apache.drill.exec.record.BatchSchema;
@@ -33,19 +34,21 @@ import java.io.IOException;
 import java.util.Iterator;
 import java.util.List;
 
-public class MetadataAggPrel extends SingleRel implements DrillRelNode, Prel {
+public class MetadataStreamAggPrel extends SingleRel implements DrillRelNode, Prel {
   private final MetadataAggregateContext context;
+  private final OperatorPhase phase;
 
-  public MetadataAggPrel(RelOptCluster cluster, RelTraitSet traits, RelNode input,
-      MetadataAggregateContext context) {
+  public MetadataStreamAggPrel(RelOptCluster cluster, RelTraitSet traits, RelNode input,
+      MetadataAggregateContext context, OperatorPhase phase) {
     super(cluster, traits, input);
     this.context = context;
+    this.phase = phase;
   }
 
   @Override
   public PhysicalOperator getPhysicalOperator(PhysicalPlanCreator creator) throws IOException {
     Prel child = (Prel) this.getInput();
-    MetadataAggPOP physicalOperator = new MetadataAggPOP(child.getPhysicalOperator(creator), context);
+    MetadataStreamAggPOP physicalOperator = new MetadataStreamAggPOP(child.getPhysicalOperator(creator), context, phase);
     return creator.addMetadata(this, physicalOperator);
   }
 
@@ -66,7 +69,7 @@ public class MetadataAggPrel extends SingleRel implements DrillRelNode, Prel {
 
   @Override
   public boolean needsFinalColumnReordering() {
-    return true;
+    return false;
   }
 
   @Override
@@ -77,6 +80,10 @@ public class MetadataAggPrel extends SingleRel implements DrillRelNode, Prel {
   @Override
   public RelNode copy(RelTraitSet traitSet, List<RelNode> inputs) {
     Preconditions.checkState(inputs.size() == 1);
-    return new MetadataAggPrel(getCluster(), traitSet, inputs.iterator().next(), context);
+    return new MetadataStreamAggPrel(getCluster(), traitSet, inputs.iterator().next(), context, phase);
+  }
+
+  public OperatorPhase getPhase() {
+    return phase;
   }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/PrelUtil.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/PrelUtil.java
index e6cba6b..4639bdd 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/PrelUtil.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/physical/PrelUtil.java
@@ -17,8 +17,6 @@
  */
 package org.apache.drill.exec.planner.physical;
 
-import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
-
 import org.apache.calcite.plan.RelOptCluster;
 import org.apache.calcite.plan.RelOptPlanner;
 import org.apache.calcite.plan.RelOptRuleCall;
@@ -37,22 +35,33 @@ import org.apache.drill.common.expression.FieldReference;
 import org.apache.drill.common.logical.data.Order.Ordering;
 import org.apache.drill.exec.record.BatchSchema.SelectionVectorMode;
 
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Iterator;
 import java.util.List;
+import java.util.function.Function;
 
 public class PrelUtil {
 
   public static List<Ordering> getOrdering(RelCollation collation, RelDataType rowType) {
-    List<Ordering> orderExpr = Lists.newArrayList();
-
-    final List<String> childFields = rowType.getFieldNames();
-
-    for (RelFieldCollation fc : collation.getFieldCollations()) {
-      FieldReference fr = new FieldReference(childFields.get(fc.getFieldIndex()), ExpressionPosition.UNKNOWN);
-      orderExpr.add(new Ordering(fc.getDirection(), fr, fc.nullDirection));
+    List<Ordering> orderExpr = new ArrayList<>();
+
+    List<String> childFields = rowType.getFieldNames();
+    Function<RelFieldCollation, String> fieldNameProvider;
+    if (collation instanceof MetadataAggPrule.NamedRelCollation) {
+      fieldNameProvider = fieldCollation -> {
+        MetadataAggPrule.NamedRelCollation namedCollation = (MetadataAggPrule.NamedRelCollation) collation;
+        return namedCollation.getName(fieldCollation.getFieldIndex());
+      };
+    } else {
+      fieldNameProvider = fieldCollation -> childFields.get(fieldCollation.getFieldIndex());
     }
 
+    collation.getFieldCollations().forEach(fieldCollation -> {
+      FieldReference fieldReference = new FieldReference(fieldNameProvider.apply(fieldCollation), ExpressionPosition.UNKNOWN);
+      orderExpr.add(new Ordering(fieldCollation.getDirection(), fieldReference, fieldCollation.nullDirection));
+    });
+
     return orderExpr;
   }
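A standalone sketch (invented names) of the name-resolution strategy added to getOrdering() above: the field-name provider is chosen once, depending on whether the collation carries its own field names, and reuses NamedRelCollation's convention that group-by key ordinals start at 1.

import java.util.Arrays;
import java.util.List;
import java.util.function.IntFunction;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class OrderingNameSketch {
  public static void main(String[] args) {
    List<String> childFields = Arrays.asList("metadata", "dir0", "dir1"); // row type at planning time
    List<String> collationNames = Arrays.asList("dir0", "dir1");          // names stored by a "named" collation
    boolean namedCollation = true;

    IntFunction<String> fieldNameProvider;
    if (namedCollation) {
      // named collations resolve by stored name; key ordinals are 1-based, hence the "- 1"
      fieldNameProvider = ordinal -> collationNames.get(ordinal - 1);
    } else {
      // plain collations keep resolving by position in the row type
      fieldNameProvider = ordinal -> childFields.get(ordinal);
    }

    List<String> ordering = IntStream.rangeClosed(1, 2)
        .mapToObj(fieldNameProvider)
        .collect(Collectors.toList());
    System.out.println(ordering); // [dir0, dir1]
  }
}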
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/handlers/MetastoreAnalyzeTableHandler.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/handlers/MetastoreAnalyzeTableHandler.java
index c34ee99..36eae41 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/handlers/MetastoreAnalyzeTableHandler.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/handlers/MetastoreAnalyzeTableHandler.java
@@ -37,6 +37,7 @@ import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.logical.data.NamedExpression;
 import org.apache.drill.common.util.function.CheckedSupplier;
 import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.metastore.analyze.AnalyzeInfoProvider;
 import org.apache.drill.exec.metastore.analyze.MetadataAggregateContext;
 import org.apache.drill.exec.metastore.analyze.MetadataControllerContext;
@@ -116,12 +117,14 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
           .build(logger);
     }
 
+    ColumnNamesOptions columnNamesOptions = new ColumnNamesOptions(context.getOptions());
+
     SqlIdentifier tableIdentifier = sqlAnalyzeTable.getTableIdentifier();
     // creates select with DYNAMIC_STAR column and analyze specific columns to obtain corresponding table scan
     SqlSelect scanSql = new SqlSelect(
         SqlParserPos.ZERO,
         SqlNodeList.EMPTY,
-        getColumnList(sqlAnalyzeTable, analyzeInfoProvider),
+        getColumnList(analyzeInfoProvider.getProjectionFields(table, getMetadataType(sqlAnalyzeTable), columnNamesOptions)),
         tableIdentifier,
         null,
         null,
@@ -175,13 +178,12 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
   /**
    * Generates the column list with {@link SchemaPath#DYNAMIC_STAR} and columns required for analyze.
    */
-  private SqlNodeList getColumnList(SqlMetastoreAnalyzeTable sqlAnalyzeTable, AnalyzeInfoProvider analyzeInfoProvider) {
+  private SqlNodeList getColumnList(List<SchemaPath> projectingColumns) {
     SqlNodeList columnList = new SqlNodeList(SqlParserPos.ZERO);
     columnList.add(new SqlIdentifier(SchemaPath.DYNAMIC_STAR, SqlParserPos.ZERO));
-    MetadataType metadataLevel = getMetadataType(sqlAnalyzeTable);
-    for (SqlIdentifier field : analyzeInfoProvider.getProjectionFields(metadataLevel, context.getPlannerSettings().getOptions())) {
-      columnList.add(field);
-    }
+    projectingColumns.stream()
+        .map(segmentColumn -> new SqlIdentifier(segmentColumn.getRootSegmentPath(), SqlParserPos.ZERO))
+        .forEach(columnList::add);
     return columnList;
   }
 
@@ -219,7 +221,9 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
         .workspace(workspaceName)
         .build();
 
-    List<String> segmentColumns = analyzeInfoProvider.getSegmentColumns(table, context.getPlannerSettings().getOptions()).stream()
+    ColumnNamesOptions columnNamesOptions = new ColumnNamesOptions(context.getOptions());
+
+    List<String> segmentColumns = analyzeInfoProvider.getSegmentColumns(table, columnNamesOptions).stream()
         .map(SchemaPath::getRootSegmentPath)
         .collect(Collectors.toList());
     List<NamedExpression> segmentExpressions = segmentColumns.stream()
@@ -298,7 +302,7 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
           .forEach(statisticsColumns::add);
     }
 
-    SchemaPath locationField = analyzeInfoProvider.getLocationField(config.getContext().getOptions());
+    SchemaPath locationField = analyzeInfoProvider.getLocationField(columnNamesOptions);
 
     if (analyzeInfoProvider.supportsMetadataType(MetadataType.ROW_GROUP) && metadataLevel.includes(MetadataType.ROW_GROUP)) {
       MetadataHandlerContext handlerContext = MetadataHandlerContext.builder()
@@ -344,7 +348,7 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
             .build();
 
         convertedRelNode = getSegmentAggRelNode(segmentExpressions, convertedRelNode,
-            createNewAggregations, statisticsColumns, locationField, analyzeInfoProvider, i, handlerContext);
+            createNewAggregations, statisticsColumns, locationField, i, handlerContext);
 
         locationField = SchemaPath.getSimplePath(MetastoreAnalyzeConstants.LOCATION_FIELD);
 
@@ -409,6 +413,7 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
         .interestingColumns(statisticsColumns)
         .createNewAggregations(createNewAggregations)
         .excludedColumns(excludedColumns)
+        .metadataLevel(MetadataType.TABLE)
         .build();
 
     convertedRelNode = new MetadataAggRel(convertedRelNode.getCluster(),
@@ -424,22 +429,20 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
 
   private DrillRel getSegmentAggRelNode(List<NamedExpression> segmentExpressions, DrillRel convertedRelNode,
       boolean createNewAggregations, List<SchemaPath> statisticsColumns, SchemaPath locationField,
-      AnalyzeInfoProvider analyzeInfoProvider, int segmentLevel, MetadataHandlerContext handlerContext) {
-      SchemaPath lastModifiedTimeField =
+      int segmentLevel, MetadataHandlerContext handlerContext) {
+    SchemaPath lastModifiedTimeField =
         SchemaPath.getSimplePath(config.getContext().getOptions().getString(ExecConstants.IMPLICIT_LAST_MODIFIED_TIME_COLUMN_LABEL));
 
     List<SchemaPath> excludedColumns = Arrays.asList(lastModifiedTimeField, locationField);
 
-    List<NamedExpression> groupByExpressions = new ArrayList<>();
-    groupByExpressions.add(analyzeInfoProvider.getParentLocationExpression(locationField));
-
-    groupByExpressions.addAll(segmentExpressions);
+    List<NamedExpression> groupByExpressions = new ArrayList<>(segmentExpressions);
 
     MetadataAggregateContext aggregateContext = MetadataAggregateContext.builder()
-        .groupByExpressions(groupByExpressions.subList(0, segmentLevel + 1))
+        .groupByExpressions(groupByExpressions.subList(0, segmentLevel))
         .interestingColumns(statisticsColumns)
         .createNewAggregations(createNewAggregations)
         .excludedColumns(excludedColumns)
+        .metadataLevel(MetadataType.SEGMENT)
         .build();
 
     convertedRelNode = new MetadataAggRel(convertedRelNode.getCluster(),
@@ -470,6 +473,7 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
         .interestingColumns(statisticsColumns)
         .createNewAggregations(createNewAggregations)
         .excludedColumns(excludedColumns)
+        .metadataLevel(MetadataType.FILE)
         .build();
 
     convertedRelNode = new MetadataAggRel(convertedRelNode.getCluster(),
@@ -508,6 +512,7 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
         .interestingColumns(statisticsColumns)
         .createNewAggregations(createNewAggregations)
         .excludedColumns(excludedColumns)
+        .metadataLevel(MetadataType.ROW_GROUP)
         .build();
 
     convertedRelNode = new MetadataAggRel(convertedRelNode.getCluster(),
@@ -525,11 +530,10 @@ public class MetastoreAnalyzeTableHandler extends DefaultSqlHandler {
       SchemaPath locationField, String rowGroupIndexColumn, SchemaPath rgiField) {
     List<NamedExpression> rowGroupGroupByExpressions = new ArrayList<>(segmentExpressions);
     rowGroupGroupByExpressions.add(
+        new NamedExpression(locationField, FieldReference.getWithQuotedRef(MetastoreAnalyzeConstants.LOCATION_FIELD)));
+    rowGroupGroupByExpressions.add(
         new NamedExpression(rgiField,
             FieldReference.getWithQuotedRef(rowGroupIndexColumn)));
-
-    rowGroupGroupByExpressions.add(
-        new NamedExpression(locationField, FieldReference.getWithQuotedRef(MetastoreAnalyzeConstants.LOCATION_FIELD)));
     return rowGroupGroupByExpressions;
   }
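
One effect of the changes above is that the per-level group-by list for segment aggregation is now derived directly from the ordered segment expressions via subList, without the extra parent-location expression. A hedged, self-contained sketch of that slicing (the segment column names here are made up for illustration):

import java.util.List;

class SegmentLevelSketch {
  public static void main(String[] args) {
    List<String> segmentExpressions = List.of("dir0", "dir1", "dir2");
    // Level N aggregates group by the first N segment columns.
    for (int level = 1; level <= segmentExpressions.size(); level++) {
      System.out.println("segment level " + level + " -> " + segmentExpressions.subList(0, level));
    }
  }
}
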
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/ColumnExplorer.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/ColumnExplorer.java
index 2b2dd74..d4c45ae 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/ColumnExplorer.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/ColumnExplorer.java
@@ -31,6 +31,7 @@ import org.apache.drill.common.exceptions.DrillRuntimeException;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.map.CaseInsensitiveMap;
 import org.apache.drill.exec.ExecConstants;
+import org.apache.drill.exec.metastore.ColumnNamesOptions;
 import org.apache.drill.exec.server.options.OptionManager;
 import org.apache.drill.exec.server.options.OptionValue;
 import org.apache.drill.exec.store.dfs.FileSelection;
@@ -156,7 +157,7 @@ public class ColumnExplorer {
    * @param path column path
    * @return true if given column is partition, false otherwise
    */
-  public static boolean isPartitionColumn(String partitionDesignator, String path){
+  public static boolean isPartitionColumn(String partitionDesignator, String path) {
     Pattern pattern = Pattern.compile(String.format("%s[0-9]+", partitionDesignator));
     Matcher matcher = pattern.matcher(path);
     return matcher.matches();
@@ -164,7 +165,18 @@ public class ColumnExplorer {
 
   public boolean isImplicitColumn(String name) {
     return isPartitionColumn(partitionDesignator, name) ||
-           isImplicitFileColumn(name);
+        isImplicitOrInternalFileColumn(name);
+  }
+
+  /**
+   * Checks whether given column is implicit or internal.
+   *
+   * @param name name of the column to check
+   * @return {@code true} if given column is implicit or internal, {@code false} otherwise
+   */
+  public boolean isImplicitOrInternalFileColumn(String name) {
+    return allImplicitColumns.get(name) != null
+        || allInternalColumns.get(name) != null;
   }
 
   public boolean isImplicitFileColumn(String name) {
@@ -191,14 +203,13 @@ public class ColumnExplorer {
    * Returns list with partition column names.
    * For the case when table has several levels of nesting, max level is chosen.
    *
-   * @param selection     the source of file paths
-   * @param optionManager the source of session option value for partition column label
+   * @param selection          the source of file paths
+   * @param columnNamesOptions the source of session option value for partition column label
    * @return list with partition column names.
    */
-  public static List<String> getPartitionColumnNames(FileSelection selection, OptionManager optionManager) {
+  public static List<String> getPartitionColumnNames(FileSelection selection, ColumnNamesOptions columnNamesOptions) {
 
-    String partitionColumnLabel = optionManager.getString(
-        ExecConstants.FILESYSTEM_PARTITION_COLUMN_LABEL);
+    String partitionColumnLabel = columnNamesOptions.partitionColumnNameLabel();
 
     return getPartitionColumnNames(selection, partitionColumnLabel);
   }
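
getPartitionColumnNames() now reads the partition column label from a ColumnNamesOptions object instead of pulling it from the OptionManager directly. A rough sketch of the idea, snapshotting option values into a small value object; the OptionSource interface and the option key string are illustrative stand-ins, not Drill API:

class ColumnNamesOptionsSketch {
  interface OptionSource {                 // stand-in for Drill's OptionManager
    String getString(String key);
  }

  static final class Names {
    final String partitionColumnLabel;

    Names(OptionSource options) {
      // Read the option once at construction; callers no longer depend on the option manager.
      this.partitionColumnLabel = options.getString("drill.exec.storage.file.partition.column.label");
    }
  }

  public static void main(String[] args) {
    Names names = new Names(key -> "dir");
    System.out.println(names.partitionColumnLabel + "0");   // dir0
  }
}
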
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/AbstractParquetGroupScan.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/AbstractParquetGroupScan.java
index 50112ab..a9bff98 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/AbstractParquetGroupScan.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/AbstractParquetGroupScan.java
@@ -23,6 +23,7 @@ import java.util.Collection;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
+import java.util.LinkedHashMap;
 import java.util.List;
 import java.util.Map;
 import java.util.Objects;
@@ -31,6 +32,7 @@ import java.util.function.Function;
 import java.util.stream.Collectors;
 
 import org.apache.commons.collections.CollectionUtils;
+import org.apache.commons.collections.MapUtils;
 import org.apache.drill.common.expression.ExpressionStringBuilder;
 import org.apache.drill.common.expression.LogicalExpression;
 import org.apache.drill.common.expression.SchemaPath;
@@ -72,7 +74,7 @@ import com.fasterxml.jackson.annotation.JsonIgnore;
 import com.fasterxml.jackson.annotation.JsonInclude;
 import com.fasterxml.jackson.annotation.JsonProperty;
 
-public abstract class AbstractParquetGroupScan extends AbstractGroupScanWithMetadata {
+public abstract class AbstractParquetGroupScan extends AbstractGroupScanWithMetadata<ParquetMetadataProvider> {
 
   private static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger(AbstractParquetGroupScan.class);
 
@@ -140,6 +142,9 @@ public abstract class AbstractParquetGroupScan extends AbstractGroupScanWithMeta
    */
   @Override
   public List<EndpointAffinity> getOperatorAffinity() {
+    if (endpointAffinities == null) {
+      this.endpointAffinities = AffinityCreator.getAffinityMap(getRowGroupInfos());
+    }
     return endpointAffinities;
   }
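
getOperatorAffinity() now computes the affinity map lazily, on first request, instead of eagerly in init(). A compact sketch of that lazy, single-threaded caching pattern with a Supplier stand-in:

import java.util.function.Supplier;

class LazyValueSketch<T> {
  private final Supplier<T> compute;
  private T cached;

  LazyValueSketch(Supplier<T> compute) {
    this.compute = compute;
  }

  T get() {
    if (cached == null) {
      cached = compute.get();   // computed once, on demand
    }
    return cached;
  }

  public static void main(String[] args) {
    LazyValueSketch<String> affinity = new LazyValueSketch<>(() -> {
      System.out.println("computing affinity map");
      return "node1 -> 0.5";
    });
    affinity.get();   // triggers the computation
    affinity.get();   // served from the cached value
  }
}
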
 
@@ -369,11 +374,11 @@ public abstract class AbstractParquetGroupScan extends AbstractGroupScanWithMeta
 
     return getFilterer()
         .rowGroups(prunedRowGroups)
-        .table(getTableMetadata())
-        .partitions(getPartitionsMetadata())
-        .segments(getSegmentsMetadata())
+        .table(tableMetadata)
+        .partitions(partitions)
+        .segments(segments)
         .files(qualifiedFiles)
-        .nonInterestingColumns(getNonInterestingColumnsMetadata())
+        .nonInterestingColumns(nonInterestingColumnsMetadata)
         .matching(matchAllMetadata)
         .build();
   }
@@ -433,17 +438,9 @@ public abstract class AbstractParquetGroupScan extends AbstractGroupScanWithMeta
   }
 
   // protected methods block
-  @Override
-  protected void init() throws IOException {
-    super.init();
-
-    this.partitionColumns = metadataProvider.getPartitionColumns();
-    this.endpointAffinities = AffinityCreator.getAffinityMap(getRowGroupInfos());
-  }
-
   protected Multimap<Path, RowGroupMetadata> getRowGroupsMetadata() {
     if (rowGroups == null) {
-      rowGroups = ((ParquetMetadataProvider) metadataProvider).getRowGroupsMetadataMap();
+      rowGroups = metadataProvider.getRowGroupsMetadataMap();
     }
     return rowGroups;
   }
@@ -534,8 +531,6 @@ public abstract class AbstractParquetGroupScan extends AbstractGroupScanWithMeta
         newScan.fileSet = new HashSet<>(newScan.getRowGroupsMetadata().keySet());
       }
 
-      newScan.endpointAffinities = AffinityCreator.getAffinityMap(newScan.getRowGroupInfos());
-
       return newScan;
     }
 
@@ -593,6 +588,16 @@ public abstract class AbstractParquetGroupScan extends AbstractGroupScanWithMeta
 
         this.rowGroups = LinkedListMultimap.create();
         filteredRowGroups.forEach(entry -> this.rowGroups.put(entry.getPath(), entry));
+        // updates the files list to keep only files whose row groups are still present
+        if (MapUtils.isNotEmpty(files)) {
+          files = rowGroups.keySet().stream()
+              .map(files::get)
+              .collect(Collectors.toMap(
+                  FileMetadata::getPath,
+                  Function.identity(),
+                  (o, n) -> n,
+                  LinkedHashMap::new));
+        }
       } else {
         this.rowGroups = prunedRowGroups;
         matchAllMetadata = false;
@@ -601,6 +606,67 @@ public abstract class AbstractParquetGroupScan extends AbstractGroupScanWithMeta
     }
 
     /**
+     * Performs filtering of metadata at the file level.
+     *
+     * @param optionManager     option manager
+     * @param filterPredicate   filter expression
+     * @param schemaPathsInExpr columns used in filter expression
+     */
+    @Override
+    protected void filterFileMetadata(OptionManager optionManager,
+        FilterPredicate<?> filterPredicate,
+        Set<SchemaPath> schemaPathsInExpr) {
+      Map<Path, FileMetadata> prunedFiles;
+      if (!source.getPartitionsMetadata().isEmpty()
+          && source.getPartitionsMetadata().size() > getPartitions().size()) {
+        // prunes files to leave only files which are contained by pruned partitions
+        prunedFiles = pruneForPartitions(source.getFilesMetadata(), getPartitions());
+      } else if (!source.getSegmentsMetadata().isEmpty()
+          && source.getSegmentsMetadata().size() > getSegments().size()) {
+        // prunes files to leave only files which are contained by pruned segments
+        prunedFiles = pruneForSegments(source.getFilesMetadata(), getSegments());
+      } else {
+        prunedFiles = source.getFilesMetadata();
+      }
+
+      if (isMatchAllMetadata()) {
+        files = prunedFiles;
+        return;
+      }
+
+      // files which have only a single row group can be pruned later, when row groups are pruned
+      Map<Path, FileMetadata> omittedFiles = new HashMap<>();
+
+      AbstractParquetGroupScan abstractParquetGroupScan = (AbstractParquetGroupScan) source;
+
+      Map<Path, FileMetadata> filesToFilter = new HashMap<>(prunedFiles);
+      if (!abstractParquetGroupScan.rowGroups.isEmpty()) {
+        prunedFiles.forEach((path, fileMetadata) -> {
+          if (abstractParquetGroupScan.rowGroups.get(path).size() == 1) {
+            omittedFiles.put(path, fileMetadata);
+            filesToFilter.remove(path);
+          }
+        });
+      }
+
+      // Stop file pruning when the number of files to filter exceeds
+      // PARQUET_ROWGROUP_FILTER_PUSHDOWN_PLANNING_THRESHOLD.
+      if (filesToFilter.size() <= optionManager.getOption(
+          PlannerSettings.PARQUET_ROWGROUP_FILTER_PUSHDOWN_PLANNING_THRESHOLD)) {
+
+        matchAllMetadata = true;
+        files = filterAndGetMetadata(schemaPathsInExpr, filesToFilter.values(), filterPredicate, optionManager).stream()
+            .collect(Collectors.toMap(FileMetadata::getPath, Function.identity()));
+
+        files.putAll(omittedFiles);
+      } else {
+        matchAllMetadata = false;
+        files = prunedFiles;
+        overflowLevel = MetadataType.FILE;
+      }
+    }
+
+    /**
      * Removes metadata which does not belong to any of segments in metadata list.
      *
      * @param metadataToPrune         list of metadata which should be pruned
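
The new filterFileMetadata() above only evaluates the filter against file metadata while the candidate count stays within PARQUET_ROWGROUP_FILTER_PUSHDOWN_PLANNING_THRESHOLD; otherwise it keeps all pruned files and records the overflow level. A hedged, generic sketch of that guard (types, names, and the threshold value are illustrative, not the Drill implementation):

import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

class ThresholdPruningSketch {
  static <T> List<T> pruneIfCheap(List<T> candidates, Predicate<T> keep, int threshold) {
    if (candidates.size() > threshold) {
      return candidates;   // too many candidates: skip pruning at this level
    }
    return candidates.stream().filter(keep).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    System.out.println(pruneIfCheap(List.of(1, 2, 3, 4), n -> n % 2 == 0, 10)); // [2, 4]
    System.out.println(pruneIfCheap(List.of(1, 2, 3, 4), n -> n % 2 == 0, 2));  // [1, 2, 3, 4]
  }
}
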
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetFileTableMetadataProviderBuilder.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetFileTableMetadataProviderBuilder.java
index 4df69d1..cc93873 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetFileTableMetadataProviderBuilder.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetFileTableMetadataProviderBuilder.java
@@ -18,6 +18,7 @@
 package org.apache.drill.exec.store.parquet;
 
 import org.apache.drill.exec.metastore.ParquetTableMetadataProvider;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
 import org.apache.drill.metastore.metadata.TableMetadataProviderBuilder;
 import org.apache.drill.exec.store.dfs.DrillFileSystem;
 import org.apache.drill.exec.store.dfs.FileSelection;
@@ -32,6 +33,9 @@ import java.util.List;
  */
 public interface ParquetFileTableMetadataProviderBuilder extends TableMetadataProviderBuilder {
 
+  @Override
+  ParquetFileTableMetadataProviderBuilder withSchema(TupleMetadata schema);
+
   ParquetFileTableMetadataProviderBuilder withEntries(List<ReadEntryWithPath> entries);
 
   ParquetFileTableMetadataProviderBuilder withSelectionRoot(Path selectionRoot);
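
The overriding withSchema(TupleMetadata) declaration above uses a covariant return type so that fluent chains stay typed to the Parquet-specific builder instead of degrading to TableMetadataProviderBuilder. A minimal illustration with stand-in interfaces (not the Drill types):

interface BaseBuilder {
  BaseBuilder withSchema(String schema);
}

interface ParquetBuilderSketch extends BaseBuilder {
  @Override
  ParquetBuilderSketch withSchema(String schema);   // covariant override keeps the narrow type

  ParquetBuilderSketch withSelectionRoot(String root);
  // Without the override, builder.withSchema(...).withSelectionRoot(...) would not compile,
  // because the chain would fall back to BaseBuilder after withSchema().
}
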
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/pojo/DynamicPojoRecordReader.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/pojo/DynamicPojoRecordReader.java
index 6765a20..a9ee538 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/pojo/DynamicPojoRecordReader.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/pojo/DynamicPojoRecordReader.java
@@ -29,7 +29,6 @@ import com.fasterxml.jackson.databind.ObjectMapper;
 import com.fasterxml.jackson.databind.util.StdConverter;
 
 import java.util.ArrayList;
-import java.util.Collections;
 import java.util.Iterator;
 import java.util.LinkedHashMap;
 import java.util.List;
@@ -93,7 +92,7 @@ public class DynamicPojoRecordReader<T> extends AbstractPojoRecordReader<List<T>
    * A utility class that converts from {@link com.fasterxml.jackson.databind.JsonNode}
    * to DynamicPojoRecordReader during physical plan fragment deserialization.
    */
-  public static class Converter extends StdConverter<JsonNode, DynamicPojoRecordReader> {
+  public static class Converter<T> extends StdConverter<JsonNode, DynamicPojoRecordReader<T>> {
     private static final TypeReference<LinkedHashMap<String, Class<?>>> schemaType =
         new TypeReference<LinkedHashMap<String, Class<?>>>() {};
 
@@ -105,16 +104,22 @@ public class DynamicPojoRecordReader<T> extends AbstractPojoRecordReader<List<T>
     }
 
     @Override
-    public DynamicPojoRecordReader convert(JsonNode value) {
-      LinkedHashMap<String, Class<?>> schema = mapper.convertValue(value.get("schema"), schemaType);
-
-      ArrayList records = new ArrayList(schema.size());
-      final Iterator<JsonNode> recordsIterator = value.get("records").get(0).elements();
-      for (Class<?> fieldType : schema.values()) {
-        records.add(mapper.convertValue(recordsIterator.next(), fieldType));
+    public DynamicPojoRecordReader<T> convert(JsonNode value) {
+      LinkedHashMap<String, Class<T>> schema = mapper.convertValue(value.get("schema"), schemaType);
+      List<List<T>> records = new ArrayList<>();
+
+      JsonNode serializedRecords = value.get("records");
+      for (JsonNode serializedRecord : serializedRecords) {
+        List<T> record = new ArrayList<>(schema.size());
+        Iterator<JsonNode> recordsIterator = serializedRecord.elements();
+        for (Class<T> fieldType : schema.values()) {
+          record.add(mapper.convertValue(recordsIterator.next(), fieldType));
+        }
+        records.add(record);
       }
+
       int maxRecordsToRead = value.get("recordsPerBatch").asInt();
-      return new DynamicPojoRecordReader(schema, Collections.singletonList(records), maxRecordsToRead);
+      return new DynamicPojoRecordReader(schema, records, maxRecordsToRead);
     }
   }
 }
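
The generified Converter now rebuilds every serialized record batch instead of only the first one. A hedged sketch of the loop over value.get("records") with Jackson; the JSON layout used here is an assumption for illustration, not Drill's exact serialized form:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class ConverterSketch {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    JsonNode value = mapper.readTree("{\"records\": [[1], [2], [3]]}");

    List<List<Integer>> records = new ArrayList<>();
    for (JsonNode serializedRecord : value.get("records")) {   // every record, not just the first
      List<Integer> record = new ArrayList<>();
      Iterator<JsonNode> fields = serializedRecord.elements();
      while (fields.hasNext()) {
        record.add(mapper.convertValue(fields.next(), Integer.class));
      }
      records.add(record);
    }
    System.out.println(records);   // [[1], [2], [3]]
  }
}
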
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunction.java b/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunction.java
index 38b0ae5..6b703c7 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunction.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunction.java
@@ -85,7 +85,7 @@ public class TestAggregateFunction extends PopUnitTestBase {
   public void testCovarianceCorrelation() throws Throwable {
     String planPath = "/functions/test_covariance.json";
     String dataPath = "/covariance_input.json";
-    Double expectedValues[] = {4.571428571428571d, 4.857142857142857d, -6.000000000000002d, 4.0d, 4.25d, -5.250000000000002d, 1.0d, 0.9274260335029677d, -1.0000000000000004d};
+    Double[] expectedValues = {4.571428571428571d, 4.857142857142857d, -6.000000000000002d, 4.0d, 4.25d, -5.250000000000002d, 1.0d, 0.9274260335029677d, -1.0000000000000004d};
 
     runTest(expectedValues, planPath, dataPath);
   }
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunctions.java b/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunctions.java
index f79757d..fcfd3fe 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunctions.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/fn/impl/TestAggregateFunctions.java
@@ -1056,66 +1056,172 @@ public class TestAggregateFunctions extends ClusterTest {
   }
 
   @Test
-  public void testCollectList() throws Exception {
-    testBuilder()
-        .sqlQuery("select collect_list('n_nationkey', n_nationkey, " +
-            "'n_name', n_name, 'n_regionkey', n_regionkey, 'n_comment', n_comment) as l from (select * from cp.`tpch/nation.parquet` limit 2)")
-        .unOrdered()
-        .baselineColumns("l")
-        .baselineValues(listOf(
-            mapOf("n_nationkey", 0, "n_name", "ALGERIA",
-                "n_regionkey", 0, "n_comment", " haggle. carefully final deposits detect slyly agai"),
-            mapOf("n_nationkey", 1, "n_name", "ARGENTINA", "n_regionkey", 1,
-                "n_comment", "al foxes promise slyly according to the regular accounts. bold requests alon")))
-        .go();
+  public void testCollectListStreamAgg() throws Exception {
+    try {
+      client.alterSession(PlannerSettings.HASHAGG.getOptionName(), false);
+      testBuilder()
+          .sqlQuery("select collect_list('n_nationkey', n_nationkey, " +
+              "'n_name', n_name, 'n_regionkey', n_regionkey, 'n_comment', n_comment) as l " +
+              "from (select * from cp.`tpch/nation.parquet` limit 2)")
+          .unOrdered()
+          .baselineColumns("l")
+          .baselineValues(listOf(
+              mapOf("n_nationkey", 0, "n_name", "ALGERIA",
+                  "n_regionkey", 0, "n_comment", " haggle. carefully final deposits detect slyly agai"),
+              mapOf("n_nationkey", 1, "n_name", "ARGENTINA", "n_regionkey", 1,
+                  "n_comment", "al foxes promise slyly according to the regular accounts. bold requests alon")))
+          .go();
+    } finally {
+      client.resetSession(PlannerSettings.HASHAGG.getOptionName());
+    }
   }
 
   @Test
-  public void testCollectToListVarchar() throws Exception {
-    testBuilder()
-        .sqlQuery("select collect_to_list_varchar(`date`) as l from " +
-            "(select * from cp.`store/json/clicks.json` limit 2)")
-        .unOrdered()
-        .baselineColumns("l")
-        .baselineValues(listOf("2014-04-26", "2014-04-20"))
-        .go();
+  public void testCollectListHashAgg() throws Exception {
+    try {
+      client.alterSession(PlannerSettings.STREAMAGG.getOptionName(), false);
+      testBuilder()
+          .sqlQuery("select collect_list('n_nationkey', n_nationkey, " +
+              "'n_name', n_name, 'n_regionkey', n_regionkey, 'n_comment', n_comment) as l " +
+              "from (select * from cp.`tpch/nation.parquet` limit 2) group by 'a'")
+          .unOrdered()
+          .baselineColumns("l")
+          .baselineValues(listOf(
+              mapOf("n_nationkey", 0, "n_name", "ALGERIA",
+                  "n_regionkey", 0, "n_comment", " haggle. carefully final deposits detect slyly agai"),
+              mapOf("n_nationkey", 1, "n_name", "ARGENTINA", "n_regionkey", 1,
+                  "n_comment", "al foxes promise slyly according to the regular accounts. bold requests alon")))
+          .go();
+    } finally {
+      client.resetSession(PlannerSettings.STREAMAGG.getOptionName());
+    }
   }
 
   @Test
-  public void testSchemaFunction() throws Exception {
-    TupleMetadata schema = new SchemaBuilder()
-        .add("n_nationkey", TypeProtos.MinorType.INT)
-        .add("n_name", TypeProtos.MinorType.VARCHAR)
-        .add("n_regionkey", TypeProtos.MinorType.INT)
-        .add("n_comment", TypeProtos.MinorType.VARCHAR)
-        .build();
+  public void testCollectToListVarcharStreamAgg() throws Exception {
+    try {
+      client.alterSession(PlannerSettings.HASHAGG.getOptionName(), false);
+      testBuilder()
+          .sqlQuery("select collect_to_list_varchar(`date`) as l from " +
+              "(select * from cp.`store/json/clicks.json` limit 2)")
+          .unOrdered()
+          .baselineColumns("l")
+          .baselineValues(listOf("2014-04-26", "2014-04-20"))
+          .go();
+    } finally {
+      client.resetSession(PlannerSettings.HASHAGG.getOptionName());
+    }
+  }
 
-    testBuilder()
-        .sqlQuery("select schema('n_nationkey', n_nationkey, " +
-            "'n_name', n_name, 'n_regionkey', n_regionkey, 'n_comment', n_comment) as l from " +
-            "(select * from cp.`tpch/nation.parquet` limit 2)")
-        .unOrdered()
-        .baselineColumns("l")
-        .baselineValues(schema.jsonString())
-        .go();
+  @Test
+  public void testCollectToListVarcharHashAgg() throws Exception {
+    try {
+      client.alterSession(PlannerSettings.STREAMAGG.getOptionName(), false);
+      testBuilder()
+          .sqlQuery("select collect_to_list_varchar(`date`) as l from " +
+              "(select * from cp.`store/json/clicks.json` limit 2) group by 'a'")
+          .unOrdered()
+          .baselineColumns("l")
+          .baselineValues(listOf("2014-04-26", "2014-04-20"))
+          .go();
+    } finally {
+      client.resetSession(PlannerSettings.STREAMAGG.getOptionName());
+    }
   }
 
   @Test
-  public void testMergeSchemaFunction() throws Exception {
-    String schema = new SchemaBuilder()
-        .add("n_nationkey", TypeProtos.MinorType.INT)
-        .add("n_name", TypeProtos.MinorType.VARCHAR)
-        .add("n_regionkey", TypeProtos.MinorType.INT)
-        .add("n_comment", TypeProtos.MinorType.VARCHAR)
-        .build()
-        .jsonString();
+  public void testSchemaFunctionStreamAgg() throws Exception {
+    try {
+      client.alterSession(PlannerSettings.HASHAGG.getOptionName(), false);
+      TupleMetadata schema = new SchemaBuilder()
+          .add("n_nationkey", TypeProtos.MinorType.INT)
+          .add("n_name", TypeProtos.MinorType.VARCHAR)
+          .add("n_regionkey", TypeProtos.MinorType.INT)
+          .add("n_comment", TypeProtos.MinorType.VARCHAR)
+          .build();
 
-    testBuilder()
-        .sqlQuery("select merge_schema('%s') as l from " +
-            "(select * from cp.`tpch/nation.parquet` limit 2)", schema)
-        .unOrdered()
-        .baselineColumns("l")
-        .baselineValues(schema)
-        .go();
+      testBuilder()
+          .sqlQuery("select schema('n_nationkey', n_nationkey, " +
+              "'n_name', n_name, 'n_regionkey', n_regionkey, 'n_comment', n_comment) as l from " +
+              "(select * from cp.`tpch/nation.parquet` limit 2)")
+          .unOrdered()
+          .baselineColumns("l")
+          .baselineValues(schema.jsonString())
+          .go();
+    } finally {
+      client.resetSession(PlannerSettings.HASHAGG.getOptionName());
+    }
+  }
+
+  @Test
+  public void testSchemaFunctionHashAgg() throws Exception {
+    try {
+      client.alterSession(PlannerSettings.STREAMAGG.getOptionName(), false);
+      TupleMetadata schema = new SchemaBuilder()
+          .add("n_nationkey", TypeProtos.MinorType.INT)
+          .add("n_name", TypeProtos.MinorType.VARCHAR)
+          .add("n_regionkey", TypeProtos.MinorType.INT)
+          .add("n_comment", TypeProtos.MinorType.VARCHAR)
+          .build();
+
+      testBuilder()
+          .sqlQuery("select schema('n_nationkey', n_nationkey, " +
+              "'n_name', n_name, 'n_regionkey', n_regionkey, 'n_comment', n_comment) as l from " +
+              "(select * from cp.`tpch/nation.parquet` limit 2) group by 'a'")
+          .unOrdered()
+          .baselineColumns("l")
+          .baselineValues(schema.jsonString())
+          .go();
+    } finally {
+      client.resetSession(PlannerSettings.STREAMAGG.getOptionName());
+    }
+  }
+
+  @Test
+  public void testMergeSchemaFunctionStreamAgg() throws Exception {
+    try {
+      client.alterSession(PlannerSettings.HASHAGG.getOptionName(), false);
+      String schema = new SchemaBuilder()
+          .add("n_nationkey", TypeProtos.MinorType.INT)
+          .add("n_name", TypeProtos.MinorType.VARCHAR)
+          .add("n_regionkey", TypeProtos.MinorType.INT)
+          .add("n_comment", TypeProtos.MinorType.VARCHAR)
+          .build()
+          .jsonString();
+
+      testBuilder()
+          .sqlQuery("select merge_schema('%s') as l from " +
+              "(select * from cp.`tpch/nation.parquet` limit 2)", schema)
+          .unOrdered()
+          .baselineColumns("l")
+          .baselineValues(schema)
+          .go();
+    } finally {
+      client.resetSession(PlannerSettings.HASHAGG.getOptionName());
+    }
+  }
+
+  @Test
+  public void testMergeSchemaFunctionHashAgg() throws Exception {
+    try {
+      client.alterSession(PlannerSettings.STREAMAGG.getOptionName(), false);
+      String schema = new SchemaBuilder()
+          .add("n_nationkey", TypeProtos.MinorType.INT)
+          .add("n_name", TypeProtos.MinorType.VARCHAR)
+          .add("n_regionkey", TypeProtos.MinorType.INT)
+          .add("n_comment", TypeProtos.MinorType.VARCHAR)
+          .build()
+          .jsonString();
+
+      testBuilder()
+          .sqlQuery("select merge_schema('%s') as l from " +
+              "(select * from cp.`tpch/nation.parquet` limit 2) group by 'a'", schema)
+          .unOrdered()
+          .baselineColumns("l")
+          .baselineValues(schema)
+          .go();
+    } finally {
+      client.resetSession(PlannerSettings.STREAMAGG.getOptionName());
+    }
   }
 }
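
Each of the rewritten tests follows the same discipline: disable the competing aggregation operator via alterSession, run the query, and restore the session in a finally block. If this grows further, the pattern could be factored into a helper along these lines (a sketch; the SessionClient stand-in and helper name are hypothetical, only the try/finally shape mirrors the tests):

class WithOptionDisabledSketch {
  interface SessionClient {                       // stand-in for the test framework client
    void alterSession(String option, Object value);
    void resetSession(String option);
  }

  interface CheckedRunnable {
    void run() throws Exception;
  }

  static void withOptionDisabled(SessionClient client, String option, CheckedRunnable body) throws Exception {
    try {
      client.alterSession(option, false);
      body.run();
    } finally {
      client.resetSession(option);                // always restore the session default
    }
  }
}
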
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestAggWithAnyValue.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestAggWithAnyValue.java
index 850b49d..a1cd2e1 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestAggWithAnyValue.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestAggWithAnyValue.java
@@ -18,22 +18,29 @@
 
 package org.apache.drill.exec.physical.impl.agg;
 
-import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
+import org.apache.drill.exec.physical.config.HashAggregate;
+import org.apache.drill.exec.planner.physical.AggPrelBase.OperatorPhase;
+import org.apache.drill.exec.planner.physical.PlannerSettings;
 import org.apache.drill.exec.physical.config.StreamingAggregate;
+import org.apache.drill.test.ClusterFixture;
+import org.apache.drill.test.ClusterTest;
 import org.apache.drill.test.PhysicalOpUnitTestBase;
 import org.apache.drill.exec.util.JsonStringArrayList;
-import org.apache.drill.test.BaseTestQuery;
 import org.apache.drill.categories.OperatorTest;
+import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
-import org.apache.drill.test.TestBuilder;
 import org.junit.experimental.runners.Enclosed;
 import org.junit.runner.RunWith;
 
 import java.math.BigDecimal;
+import java.util.Arrays;
 import java.util.List;
 
+import static org.apache.drill.test.TestBuilder.listOf;
+import static org.apache.drill.test.TestBuilder.mapOf;
+
 @Category(OperatorTest.class)
 @RunWith(Enclosed.class)
 public class TestAggWithAnyValue {
@@ -43,7 +50,7 @@ public class TestAggWithAnyValue {
     @Test
     public void testStreamAggWithGroupBy() {
       StreamingAggregate aggConf = new StreamingAggregate(null, parseExprs("age.`max`", "age"), parseExprs("any_value(a)", "any_a"));
-      List<String> inputJsonBatches = Lists.newArrayList(
+      List<String> inputJsonBatches = Arrays.asList(
           "[{ \"age\": {\"min\":20, \"max\":60}, \"city\": \"San Bruno\", \"de\": \"987654321987654321987654321.10987654321\"," +
               " \"a\": [{\"b\":50, \"c\":30},{\"b\":70, \"c\":40}], \"m\": [{\"n\": [10, 11, 12]}], \"f\": [{\"g\": {\"h\": [{\"k\": 70}, {\"k\": 80}]}}]," +
               "\"p\": {\"q\": [21, 22, 23]}" + "}, " +
@@ -63,87 +70,258 @@ public class TestAggWithAnyValue {
           .physicalOperator(aggConf)
           .inputDataStreamJson(inputJsonBatches)
           .baselineColumns("age", "any_a")
-          .baselineValues(60l, TestBuilder.listOf(TestBuilder.mapOf("b", 50l, "c", 30l), TestBuilder.mapOf("b", 70l, "c", 40l)))
-          .baselineValues(80l, TestBuilder.listOf(TestBuilder.mapOf("b", 10l, "c", 15l), TestBuilder.mapOf("b", 20l, "c", 45l)))
+          .baselineValues(60L,
+              listOf(
+                  mapOf("b", 50L, "c", 30L),
+                  mapOf("b", 70L, "c", 40L)))
+          .baselineValues(80L,
+              listOf(
+                  mapOf("b", 10L, "c", 15L),
+                  mapOf("b", 20L, "c", 45L)))
+          .go();
+    }
+
+    @Test
+    public void testHashAggWithGroupBy() {
+      HashAggregate aggConf = new HashAggregate(null,
+          OperatorPhase.PHASE_1of1,
+          parseExprs("age.`max`", "age"),
+          parseExprs("any_value(a)", "any_a"),
+          1F);
+      List<String> inputJsonBatches = Arrays.asList(
+          "[{ \"age\": {\"min\":20, \"max\":60}, \"city\": \"San Bruno\", \"de\": \"987654321987654321987654321.10987654321\"," +
+              " \"a\": [{\"b\":50, \"c\":30},{\"b\":70, \"c\":40}], \"m\": [{\"n\": [10, 11, 12]}], \"f\": [{\"g\": {\"h\": [{\"k\": 70}, {\"k\": 80}]}}]," +
+              "\"p\": {\"q\": [21, 22, 23]}}, " +
+              "{ \"age\": {\"min\":20, \"max\":60}, \"city\": \"Castro Valley\", \"de\": \"987654321987654321987654321.12987654321\"," +
+              " \"a\": [{\"b\":60, \"c\":40},{\"b\":80, \"c\":50}], \"m\": [{\"n\": [13, 14, 15]}], \"f\": [{\"g\": {\"h\": [{\"k\": 90}, {\"k\": 100}]}}]," +
+              "\"p\": {\"q\": [24, 25, 26]}}]",
+          "[{ \"age\": {\"min\":43, \"max\":80}, \"city\": \"Palo Alto\", \"de\": \"987654321987654321987654321.00987654321\"," +
+              " \"a\": [{\"b\":10, \"c\":15}, {\"b\":20, \"c\":45}], \"m\": [{\"n\": [1, 2, 3]}], \"f\": [{\"g\": {\"h\": [{\"k\": 10}, {\"k\": 20}]}}]," +
+              "\"p\": {\"q\": [27, 28, 29]}}, " +
+              "{ \"age\": {\"min\":43, \"max\":80}, \"city\": \"San Carlos\", \"de\": \"987654321987654321987654321.11987654321\"," +
+              " \"a\": [{\"b\":30, \"c\":25}, {\"b\":40, \"c\":55}], \"m\": [{\"n\": [4, 5, 6]}], \"f\": [{\"g\": {\"h\": [{\"k\": 30}, {\"k\": 40}]}}]," +
+              "\"p\": {\"q\": [30, 31, 32]}}, " +
+              "{ \"age\": {\"min\":43, \"max\":80}, \"city\": \"Palo Alto\", \"de\": \"987654321987654321987654321.13987654321\"," +
+              " \"a\": [{\"b\":70, \"c\":85}, {\"b\":90, \"c\":145}], \"m\": [{\"n\": [7, 8, 9]}], \"f\": [{\"g\": {\"h\": [{\"k\": 50}, {\"k\": 60}]}}]," +
+              "\"p\": {\"q\": [33, 34, 35]}}]");
+      legacyOpTestBuilder()
+          .physicalOperator(aggConf)
+          .inputDataStreamJson(inputJsonBatches)
+          .baselineColumns("age", "any_a")
+          .baselineValues(60L,
+              listOf(
+                  mapOf("b", 50L, "c", 30L),
+                  mapOf("b", 70L, "c", 40L)))
+          .baselineValues(80L,
+              listOf(
+                  mapOf("b", 10L, "c", 15L),
+                  mapOf("b", 20L, "c", 45L)))
           .go();
     }
   }
 
-  public static class TestAggWithAnyValueSingleBatch extends BaseTestQuery {
+  public static class TestAggWithAnyValueSingleBatch extends ClusterTest {
+
+    @BeforeClass
+    public static void setUp() throws Exception {
+      startCluster(ClusterFixture.builder(dirTestWatcher));
+    }
 
     @Test
-    public void testWithGroupBy() throws Exception {
-      String query = "select t1.age.`max` as age, count(*) as cnt, any_value(t1.a) as any_a, any_value(t1.city) as any_city, " +
-          "any_value(f) as any_f, any_value(m) as any_m, any_value(p) as any_p from  cp.`store/json/test_anyvalue.json` t1 group by t1.age.`max`";
-      testBuilder()
-          .sqlQuery(query)
-          .unOrdered()
-          .baselineColumns("age", "cnt", "any_a", "any_city", "any_f", "any_m", "any_p")
-          .baselineValues(60l, 2l, TestBuilder.listOf(TestBuilder.mapOf("b", 50l, "c", 30l), TestBuilder.mapOf("b", 70l, "c", 40l)), "San Bruno",
-              TestBuilder.listOf(TestBuilder.mapOf("g", TestBuilder.mapOf("h", TestBuilder.listOf(TestBuilder.mapOf("k", 70l), TestBuilder.mapOf("k", 80l))))),
-              TestBuilder.listOf(TestBuilder.mapOf("n", TestBuilder.listOf(10l, 11l, 12l))),
-              TestBuilder.mapOf("q", TestBuilder.listOf(21l, 22l, 23l)))
-          .baselineValues(80l, 3l, TestBuilder.listOf(TestBuilder.mapOf("b", 10l, "c", 15l), TestBuilder.mapOf("b", 20l, "c", 45l)), "Palo Alto",
-              TestBuilder.listOf(TestBuilder.mapOf("g", TestBuilder.mapOf("h", TestBuilder.listOf(TestBuilder.mapOf("k", 10l), TestBuilder.mapOf("k", 20l))))),
-              TestBuilder.listOf(TestBuilder.mapOf("n", TestBuilder.listOf(1l, 2l, 3l))),
-              TestBuilder.mapOf("q", TestBuilder.listOf(27l, 28l, 29l)))
-          .go();
+    public void testWithGroupByStreamAgg() throws Exception {
+      String query = "select t1.age.`max` as age, count(*) as cnt, any_value(t1.a) as any_a," +
+          "any_value(t1.city) as any_city, any_value(f) as any_f, any_value(m) as any_m," +
+          "any_value(p) as any_p from  cp.`store/json/test_anyvalue.json` t1 group by t1.age.`max`";
+
+      try {
+        client.alterSession(PlannerSettings.HASHAGG.getOptionName(), false);
+        testBuilder()
+            .sqlQuery(query)
+            .unOrdered()
+            .baselineColumns("age", "cnt", "any_a", "any_city", "any_f", "any_m", "any_p")
+            .baselineValues(60L, 2L,
+                listOf(
+                    mapOf("b", 50L, "c", 30L),
+                    mapOf("b", 70L, "c", 40L)),
+                "San Bruno",
+                listOf(
+                    mapOf("g",
+                        mapOf("h",
+                            listOf(mapOf("k", 70L), mapOf("k", 80L))))),
+                listOf(mapOf("n", listOf(10L, 11L, 12L))),
+                mapOf("q", listOf(21L, 22L, 23L)))
+            .baselineValues(80L, 3L,
+                listOf(
+                    mapOf("b", 10L, "c", 15L),
+                    mapOf("b", 20L, "c", 45L)),
+                "Palo Alto",
+                listOf(mapOf("g",
+                    mapOf("h", listOf(mapOf("k", 10L), mapOf("k", 20L))))),
+                listOf(mapOf("n", listOf(1L, 2L, 3L))),
+                mapOf("q", listOf(27L, 28L, 29L)))
+            .go();
+      } finally {
+        client.resetSession(PlannerSettings.HASHAGG.getOptionName());
+      }
+    }
+
+    @Test
+    public void testWithGroupByHashAgg() throws Exception {
+      String query = "select t1.age.`max` as age, count(*) as cnt, any_value(t1.a) as any_a," +
+          "any_value(t1.city) as any_city, any_value(f) as any_f, any_value(m) as any_m," +
+          "any_value(p) as any_p from  cp.`store/json/test_anyvalue.json` t1 group by t1.age.`max`";
+      try {
+        client.alterSession(PlannerSettings.STREAMAGG.getOptionName(), false);
+        testBuilder()
+            .sqlQuery(query)
+            .unOrdered()
+            .baselineColumns("age", "cnt", "any_a", "any_city", "any_f", "any_m", "any_p")
+            .baselineValues(60L, 2L,
+                listOf(
+                    mapOf("b", 50L, "c", 30L),
+                    mapOf("b", 70L, "c", 40L)),
+                "San Bruno",
+                listOf(
+                    mapOf("g",
+                        mapOf("h",
+                            listOf(mapOf("k", 70L), mapOf("k", 80L))))),
+                listOf(mapOf("n", listOf(10L, 11L, 12L))),
+                mapOf("q", listOf(21L, 22L, 23L)))
+            .baselineValues(80L, 3L,
+                listOf(
+                    mapOf("b", 10L, "c", 15L),
+                    mapOf("b", 20L, "c", 45L)),
+                "Palo Alto",
+                listOf(mapOf("g",
+                    mapOf("h", listOf(mapOf("k", 10L), mapOf("k", 20L))))),
+                listOf(mapOf("n", listOf(1L, 2L, 3L))),
+                mapOf("q", listOf(27L, 28L, 29L)))
+            .go();
+      } finally {
+        client.resetSession(PlannerSettings.STREAMAGG.getOptionName());
+      }
     }
 
     @Test
     public void testWithoutGroupBy() throws Exception {
       String query = "select count(*) as cnt, any_value(t1.a) as any_a, any_value(t1.city) as any_city, " +
-          "any_value(f) as any_f, any_value(m) as any_m, any_value(p) as any_p from  cp.`store/json/test_anyvalue.json` t1";
+          "any_value(f) as any_f, any_value(m) as any_m, any_value(p) as any_p " +
+          "from cp.`store/json/test_anyvalue.json` t1";
       testBuilder()
           .sqlQuery(query)
           .unOrdered()
           .baselineColumns("cnt", "any_a", "any_city", "any_f", "any_m", "any_p")
-          .baselineValues(5l, TestBuilder.listOf(TestBuilder.mapOf("b", 10l, "c", 15l), TestBuilder.mapOf("b", 20l, "c", 45l)), "Palo Alto",
-              TestBuilder.listOf(TestBuilder.mapOf("g", TestBuilder.mapOf("h", TestBuilder.listOf(TestBuilder.mapOf("k", 10l), TestBuilder.mapOf("k", 20l))))),
-              TestBuilder.listOf(TestBuilder.mapOf("n", TestBuilder.listOf(1l, 2l, 3l))),
-              TestBuilder.mapOf("q", TestBuilder.listOf(27l, 28l, 29l)))
+          .baselineValues(5L,
+              listOf(
+                  mapOf("b", 10L, "c", 15L),
+                  mapOf("b", 20L, "c", 45L)),
+              "Palo Alto",
+              listOf(mapOf("g", mapOf("h", listOf(mapOf("k", 10L), mapOf("k", 20L))))),
+              listOf(mapOf("n", listOf(1L, 2L, 3L))),
+              mapOf("q", listOf(27L, 28L, 29L)))
           .go();
     }
 
     @Test
-    public void testDecimalWithGroupBy() throws Exception {
-      String query = "select t1.age.`max` as age, any_value(cast(t1.de as decimal(38, 11))) as any_decimal " +
-          "from  cp.`store/json/test_anyvalue.json` t1 group by t1.age.`max`";
-      testBuilder()
-          .sqlQuery(query)
-          .unOrdered()
-          .baselineColumns("age", "any_decimal")
-          .baselineValues(60l, new BigDecimal("987654321987654321987654321.10987654321"))
-          .baselineValues(80l, new BigDecimal("987654321987654321987654321.00987654321"))
-          .go();
+    public void testDecimalWithGroupByStreamAgg() throws Exception {
+      try {
+        client.alterSession(PlannerSettings.HASHAGG.getOptionName(), false);
+        String query = "select t1.age.`max` as age, any_value(cast(t1.de as decimal(38, 11))) as any_decimal " +
+            "from cp.`store/json/test_anyvalue.json` t1 group by t1.age.`max`";
+        testBuilder()
+            .sqlQuery(query)
+            .unOrdered()
+            .baselineColumns("age", "any_decimal")
+            .baselineValues(60L, new BigDecimal("987654321987654321987654321.10987654321"))
+            .baselineValues(80L, new BigDecimal("987654321987654321987654321.00987654321"))
+            .go();
+      } finally {
+        client.resetSession(PlannerSettings.HASHAGG.getOptionName());
+      }
     }
 
     @Test
-    public void testRepeatedDecimalWithGroupBy() throws Exception {
-      JsonStringArrayList<BigDecimal> ints = new JsonStringArrayList<>();
-      ints.add(new BigDecimal("999999.999"));
-      ints.add(new BigDecimal("-999999.999"));
-      ints.add(new BigDecimal("0.000"));
-
-      JsonStringArrayList<BigDecimal> longs = new JsonStringArrayList<>();
-      longs.add(new BigDecimal("999999999.999999999"));
-      longs.add(new BigDecimal("-999999999.999999999"));
-      longs.add(new BigDecimal("0.000000000"));
-
-      JsonStringArrayList<BigDecimal> fixedLen = new JsonStringArrayList<>();
-      fixedLen.add(new BigDecimal("999999999999.999999"));
-      fixedLen.add(new BigDecimal("-999999999999.999999"));
-      fixedLen.add(new BigDecimal("0.000000"));
-
-      String query = "select any_value(decimal_int32) as any_dec_32, any_value(decimal_int64) as any_dec_64," +
-          " any_value(decimal_fixedLen) as any_dec_fixed, any_value(decimal_binary) as any_dec_bin" +
-          " from cp.`parquet/repeatedIntLondFixedLenBinaryDecimal.parquet`";
-      testBuilder()
-          .sqlQuery(query)
-          .unOrdered()
-          .baselineColumns("any_dec_32", "any_dec_64", "any_dec_fixed", "any_dec_bin")
-          .baselineValues(ints, longs, fixedLen, fixedLen)
-          .go();
+    public void testDecimalWithGroupByHashAgg() throws Exception {
+      try {
+        client.alterSession(PlannerSettings.STREAMAGG.getOptionName(), false);
+        String query = "select t1.age.`max` as age, any_value(cast(t1.de as decimal(38, 11))) as any_decimal " +
+            "from cp.`store/json/test_anyvalue.json` t1 group by t1.age.`max`";
+        testBuilder()
+            .sqlQuery(query)
+            .unOrdered()
+            .baselineColumns("age", "any_decimal")
+            .baselineValues(60L, new BigDecimal("987654321987654321987654321.10987654321"))
+            .baselineValues(80L, new BigDecimal("987654321987654321987654321.00987654321"))
+            .go();
+      } finally {
+        client.resetSession(PlannerSettings.STREAMAGG.getOptionName());
+      }
+    }
+
+    @Test
+    public void testRepeatedDecimalWithGroupByStreamAgg() throws Exception {
+      try {
+        client.alterSession(PlannerSettings.HASHAGG.getOptionName(), false);
+        JsonStringArrayList<BigDecimal> ints = new JsonStringArrayList<>();
+        ints.add(new BigDecimal("999999.999"));
+        ints.add(new BigDecimal("-999999.999"));
+        ints.add(new BigDecimal("0.000"));
+
+        JsonStringArrayList<BigDecimal> longs = new JsonStringArrayList<>();
+        longs.add(new BigDecimal("999999999.999999999"));
+        longs.add(new BigDecimal("-999999999.999999999"));
+        longs.add(new BigDecimal("0.000000000"));
+
+        JsonStringArrayList<BigDecimal> fixedLen = new JsonStringArrayList<>();
+        fixedLen.add(new BigDecimal("999999999999.999999"));
+        fixedLen.add(new BigDecimal("-999999999999.999999"));
+        fixedLen.add(new BigDecimal("0.000000"));
+
+        String query = "select any_value(decimal_int32) as any_dec_32, any_value(decimal_int64) as any_dec_64," +
+            " any_value(decimal_fixedLen) as any_dec_fixed, any_value(decimal_binary) as any_dec_bin" +
+            " from cp.`parquet/repeatedIntLondFixedLenBinaryDecimal.parquet` group by 'a'";
+        testBuilder()
+            .sqlQuery(query)
+            .unOrdered()
+            .baselineColumns("any_dec_32", "any_dec_64", "any_dec_fixed", "any_dec_bin")
+            .baselineValues(ints, longs, fixedLen, fixedLen)
+            .go();
+      } finally {
+        client.resetSession(PlannerSettings.HASHAGG.getOptionName());
+      }
+    }
+
+    @Test
+    public void testRepeatedDecimalWithGroupByHashAgg() throws Exception {
+      try {
+        client.alterSession(PlannerSettings.STREAMAGG.getOptionName(), false);
+        JsonStringArrayList<BigDecimal> ints = new JsonStringArrayList<>();
+        ints.add(new BigDecimal("999999.999"));
+        ints.add(new BigDecimal("-999999.999"));
+        ints.add(new BigDecimal("0.000"));
+
+        JsonStringArrayList<BigDecimal> longs = new JsonStringArrayList<>();
+        longs.add(new BigDecimal("999999999.999999999"));
+        longs.add(new BigDecimal("-999999999.999999999"));
+        longs.add(new BigDecimal("0.000000000"));
+
+        JsonStringArrayList<BigDecimal> fixedLen = new JsonStringArrayList<>();
+        fixedLen.add(new BigDecimal("999999999999.999999"));
+        fixedLen.add(new BigDecimal("-999999999999.999999"));
+        fixedLen.add(new BigDecimal("0.000000"));
+
+        String query = "select any_value(decimal_int32) as any_dec_32, any_value(decimal_int64) as any_dec_64," +
+            " any_value(decimal_fixedLen) as any_dec_fixed, any_value(decimal_binary) as any_dec_bin" +
+            " from cp.`parquet/repeatedIntLondFixedLenBinaryDecimal.parquet` group by 'a'";
+        testBuilder()
+            .sqlQuery(query)
+            .unOrdered()
+            .baselineColumns("any_dec_32", "any_dec_64", "any_dec_fixed", "any_dec_bin")
+            .baselineValues(ints, longs, fixedLen, fixedLen)
+            .go();
+      } finally {
+        client.resetSession(PlannerSettings.STREAMAGG.getOptionName());
+      }
     }
   }
 }
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestHashAggEmitOutcome.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestHashAggEmitOutcome.java
index fa60562..910039f 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestHashAggEmitOutcome.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/agg/TestHashAggEmitOutcome.java
@@ -42,7 +42,8 @@ import static org.apache.drill.exec.record.RecordBatch.IterOutcome.NONE;
 import static org.apache.drill.exec.record.RecordBatch.IterOutcome.OK;
 import static org.apache.drill.exec.record.RecordBatch.IterOutcome.OK_NEW_SCHEMA;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.fail;
 
 @Category(OperatorTest.class)
 public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
@@ -73,11 +74,11 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    * @param outputRowCounts - expected number of rows, in each output batch
    * @param outputOutcomes - the expected output outcomes
    */
-  private void testHashAggrEmit(int inp2_1[], int inp2_2[], String inp2_3[],  // first input batch
-                                int inp3_1[], int inp3_2[], String inp3_3[],  // second input batch
-                                String exp1_1[], int exp1_2[],            // first expected
-                                String exp2_1[], int exp2_2[],            // second expected
-                                int inpRowSet[], RecordBatch.IterOutcome inpOutcomes[],  // input batches + outcomes
+  private void testHashAggrEmit(int[] inp2_1, int[] inp2_2, String[] inp2_3,  // first input batch
+                                int[] inp3_1, int[] inp3_2, String[] inp3_3,  // second input batch
+                                String[] exp1_1, int[] exp1_2,            // first expected
+                                String[] exp2_1, int[] exp2_2,            // second expected
+                                int[] inpRowSet, RecordBatch.IterOutcome[] inpOutcomes,  // input batches + outcomes
                                 List<Integer> outputRowCounts,  // output row counts per each out batch
                                 List<RecordBatch.IterOutcome> outputOutcomes) // output outcomes
   {
@@ -134,12 +135,11 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
         case 3: inputContainer.add(nonEmptyInputRowSet3.container());
           break;
         default:
-          assertTrue(false);
+          fail();
       }
     }
-    for (RecordBatch.IterOutcome out : inpOutcomes) {  // build the outcomes
-      inputOutcomes.add(out);
-    }
+    // build the outcomes
+    inputOutcomes.addAll(Arrays.asList(inpOutcomes));
 
     //
     //  Build the Hash Agg Batch operator
@@ -158,20 +158,21 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
     //
     //  Iterate thru the next batches, and verify expected outcomes
     //
-    assertTrue( outputRowCounts.size() == outputOutcomes.size());
+    assertEquals(outputRowCounts.size(), outputOutcomes.size());
     boolean firstOne = true;
 
-    for (int ind = 0; ind < outputOutcomes.size(); ind++ ) {
+    for (int ind = 0; ind < outputOutcomes.size(); ind++) {
       RecordBatch.IterOutcome expOut = outputOutcomes.get(ind);
-      assertTrue(haBatch.next() == expOut );
-      if ( expOut == NONE ) { break; } // done
+      assertSame(expOut, haBatch.next());
+      if (expOut == NONE) {
+        break;
+      } // done
       RowSet actualRowSet = DirectRowSet.fromContainer(haBatch.getContainer());
       int expectedSize = outputRowCounts.get(ind);
       // System.out.println(expectedSize);
-      if ( 0 == expectedSize ) {
+      if (0 == expectedSize) {
         assertEquals(expectedSize, haBatch.getRecordCount());
-      }
-      else if ( firstOne ) {
+      } else if (firstOne) {
         firstOne = false;
         new RowSetComparison(expectedRowSet1).verify(actualRowSet);
       } else {
@@ -197,8 +198,8 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrWithEmptyDataSet() {
-    int inpRowSet[] = {0};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA};
+    int[] inpRowSet = {0};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, NONE);
@@ -213,8 +214,8 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrEmptyBatchEmitOutcome() {
-    int inpRowSet[] = {0, 0};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, EMIT};
+    int[] inpRowSet = {0, 0};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, EMIT};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 0, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, NONE);
@@ -230,15 +231,15 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrNonEmptyBatchEmitOutcome() {
-    int inp2_1[]         = { 2,       2,       13,       13,      4};
-    int inp2_2[]         = {20,      20,      130,      130,     40};
-    String inp2_3[] = {"item2", "item2", "item13", "item13", "item4"};
+    int[] inp2_1 = {2, 2, 13, 13, 4};
+    int[] inp2_2 = {20, 20, 130, 130, 40};
+    String[] inp2_3 = {"item2", "item2", "item13", "item13", "item4"};
 
-    String exp1_1[] = {"item2", "item13", "item4"};
-    int exp1_2[]    = {     44,      286,     44};
+    String[] exp1_1 = {"item2", "item13", "item4"};
+    int[] exp1_2 = {44, 286, 44};
 
-    int inpRowSet[] = {0, 2};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, EMIT};
+    int[] inpRowSet = {0, 2};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, EMIT};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 3, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, NONE);
@@ -252,15 +253,15 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrEmptyBatchFollowedByNonEmptyBatchEmitOutcome() {
-    int inp2_1[]          = {2,       13,       4,       0,        0,     0};
-    int inp2_2[]         = {20,      130,      40,    2000,     1300,  4000};
-    String inp2_3[] = {"item2", "item13", "item4", "item2", "item13", "item4"};
+    int[] inp2_1 = {2, 13, 4, 0, 0, 0};
+    int[] inp2_2 = {20, 130, 40, 2000, 1300, 4000};
+    String[] inp2_3 = {"item2", "item13", "item4", "item2", "item13", "item4"};
 
-    String exp1_1[] = {"item2", "item13", "item4"};
-    int exp1_2[]       = {2022,     1443,   4044};
+    String[] exp1_1 = {"item2", "item13", "item4"};
+    int[] exp1_2 = {2022, 1443, 4044};
 
-    int inpRowSet[] = {0, 0, 2};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, EMIT, EMIT};
+    int[] inpRowSet = {0, 0, 2};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, EMIT, EMIT};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 0, 3, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, EMIT, NONE);
@@ -274,15 +275,15 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrMultipleEmptyBatchFollowedByNonEmptyBatchEmitOutcome() {
-    int inp2_1[]          = {2,       13,       4,       0,       1,        0,    1};
-    int inp2_2[]         = {20,      130,      40,       0,   11000,        0,   33000};
-    String inp2_3[] = {"item2", "item13", "item4", "item2", "item2", "item13", "item13"};
+    int[] inp2_1 = {2, 13, 4, 0, 1, 0, 1};
+    int[] inp2_2 = {20, 130, 40, 0, 11000, 0, 33000};
+    String[] inp2_3 = {"item2", "item13", "item4", "item2", "item2", "item13", "item13"};
 
-    String exp1_1[] = {"item2", "item13", "item4"};
-    int exp1_2[]      = {11023,    33144,     44};
+    String[] exp1_1 = {"item2", "item13", "item4"};
+    int[] exp1_2 = {11023, 33144, 44};
 
-    int inpRowSet[] = {0, 0, 0, 0, 2};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, EMIT, EMIT, EMIT, EMIT};
+    int[] inpRowSet = {0, 0, 0, 0, 2};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, EMIT, EMIT, EMIT, EMIT};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 0, 0, 0, 3, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, EMIT, EMIT, EMIT, NONE);
@@ -300,18 +301,18 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAgrResetsAfterFirstEmitOutcome() {
-    int inp2_1[]          = {2,       3,       3,       3,       3,   3,   3,   3,   3,   3,   3, 2 };
-    int inp2_2[]         = {20,      30,      30,      30,      30,  30,  30,  30,  30,  30,  30, 20  };
-    String inp2_3[] = {"item2", "item3", "item3", "item3", "item3", "item3", "item3", "item3", "item3", "item3", "item3", "item2"};
+    int[] inp2_1 = {2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2};
+    int[] inp2_2 = {20, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 20};
+    String[] inp2_3 = {"item2", "item3", "item3", "item3", "item3", "item3", "item3", "item3", "item3", "item3", "item3", "item2"};
 
-    String exp1_1[] = {"item1"};
-    int exp1_2[]      = {11};
+    String[] exp1_1 = {"item1"};
+    int[] exp1_2 = {11};
 
-    String exp2_1[] = {"item2", "item3"};
-    int exp2_2[]        = {44,      330};
+    String[] exp2_1 = {"item2", "item3"};
+    int[] exp2_2 = {44, 330};
 
-    int inpRowSet[] = {1, 0, 2, 0};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, EMIT, OK, EMIT};
+    int[] inpRowSet = {1, 0, 2, 0};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, EMIT, OK, EMIT};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 1, 2, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, EMIT, NONE);
@@ -327,11 +328,11 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
   @Test
   public void testHashAggr_NonEmptyFirst_EmptyOKEmitOutcome() {
 
-    String exp1_1[] = {"item1"};
-    int exp1_2[]      = {11};
+    String[] exp1_1 = {"item1"};
+    int[] exp1_2 = {11};
 
-    int inpRowSet[] = {1, 0, 0, 0};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, OK, EMIT, NONE};
+    int[] inpRowSet = {1, 0, 0, 0};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, OK, EMIT, NONE};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 1, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, NONE);
@@ -349,18 +350,18 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrMultipleOutputBatch() {
-    int inp2_1[]          = {4,       2,       5,       3,       5,      4};
-    int inp2_2[]         = {40,      20,      50,      30,      50,     40};
-    String inp2_3[] = {"item4", "item2", "item5", "item3", "item5", "item4"};
+    int[] inp2_1 = {4, 2, 5, 3, 5, 4};
+    int[] inp2_2 = {40, 20, 50, 30, 50, 40};
+    String[] inp2_3 = {"item4", "item2", "item5", "item3", "item5", "item4"};
 
-    String exp1_1[] = {"item1"};
-    int exp1_2[]      = {11};
+    String[] exp1_1 = {"item1"};
+    int[] exp1_2 = {11};
 
-    String exp2_1[] = {"item4", "item2", "item5", "item3"};
-    int exp2_2[]      = {   88,      22,     110,     33};
+    String[] exp2_1 = {"item4", "item2", "item5", "item3"};
+    int[] exp2_2 = {88, 22, 110, 33};
 
-    int inpRowSet[] = {1, 0, 2};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, EMIT, OK};
+    int[] inpRowSet = {1, 0, 2};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, EMIT, OK};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 1, 4, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, OK, NONE);
@@ -374,18 +375,18 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrMultipleEMITOutcome() {
-    int inp2_1[]          = {2,       3};
-    int inp2_2[]         = {20,      30};
-    String inp2_3[] = {"item2", "item3"};
+    int[] inp2_1 = {2, 3};
+    int[] inp2_2 = {20, 30};
+    String[] inp2_3 = {"item2", "item3"};
 
-    String exp1_1[] = {"item1"};
-    int exp1_2[]      = {11};
+    String[] exp1_1 = {"item1"};
+    int[] exp1_2 = {11};
 
-    String exp2_1[] = {"item2", "item3"};
-    int exp2_2[]      = {   22,      33};
+    String[] exp2_1 = {"item2", "item3"};
+    int[] exp2_2 = {22, 33};
 
-    int inpRowSet[] = {1, 0, 2, 0};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, EMIT, EMIT, EMIT};
+    int[] inpRowSet = {1, 0, 2, 0};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, EMIT, EMIT, EMIT};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 1, 2, 0, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, EMIT, EMIT, NONE);
@@ -399,15 +400,15 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrMultipleInputToSingleOutputBatch() {
-    int inp2_1[]          = {2};
-    int inp2_2[]         = {20};
-    String inp2_3[] = {"item2"};
+    int[] inp2_1 = {2};
+    int[] inp2_2 = {20};
+    String[] inp2_3 = {"item2"};
 
-    String exp1_1[] = {"item1", "item2"};
-    int exp1_2[]      = {   11,    22};
+    String[] exp1_1 = {"item1", "item2"};
+    int[] exp1_2 = {11, 22};
 
-    int inpRowSet[] = {1, 0, 2, 0};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, OK, OK, EMIT};
+    int[] inpRowSet = {1, 0, 2, 0};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, OK, OK, EMIT};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 2, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, NONE);
@@ -421,22 +422,22 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggrMultipleInputToMultipleOutputBatch() {
-    int inp2_1[]          = {7,       2,       7,       3};
-    int inp2_2[]         = {70,      20,      70,      33};
-    String inp2_3[] = {"item7", "item1", "item7", "item3"};
+    int[] inp2_1 = {7, 2, 7, 3};
+    int[] inp2_2 = {70, 20, 70, 33};
+    String[] inp2_3 = {"item7", "item1", "item7", "item3"};
 
-    int inp3_1[]          = {17,       7,       3,       13,       9,       13};
-    int inp3_2[]         = {170,      71,      30,      130,     123,      130};
-    String inp3_3[] = {"item17", "item7", "item3", "item13", "item3", "item13"};
+    int[] inp3_1 = {17, 7, 3, 13, 9, 13};
+    int[] inp3_2 = {170, 71, 30, 130, 123, 130};
+    String[] inp3_3 = {"item17", "item7", "item3", "item13", "item3", "item13"};
 
-    String exp1_1[] = {"item1", "item7", "item3"};
-    int exp1_2[]      = {   33,     154,      36};
+    String[] exp1_1 = {"item1", "item7", "item3"};
+    int[] exp1_2 = {33, 154, 36};
 
-    String exp2_1[] = {"item17", "item7", "item3", "item13"};
-    int exp2_2[]      = {   187,      78,     165,      286};
+    String[] exp2_1 = {"item17", "item7", "item3", "item13"};
+    int[] exp2_2 = {187, 78, 165, 286};
 
-    int inpRowSet[] = {1, 0, 2, 0, 3, 0};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, OK, EMIT, OK, OK, EMIT};
+    int[] inpRowSet = {1, 0, 2, 0, 3, 0};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, OK, EMIT, OK, OK, EMIT};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 3, 4, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, EMIT, EMIT, NONE);
@@ -452,19 +453,19 @@ public class TestHashAggEmitOutcome extends BaseTestOpBatchEmitOutcome {
    */
   @Test
   public void testHashAggr_WithEmptyNonEmptyBatchesAndOKOutcome() {
-    int inp2_1[]    = {      2,       7,       3,       13,       13,      13};
-    int inp2_2[]    = {     20,      70,      33,      130,      130,     130};
-    String inp2_3[] = {"item1", "item7", "item3", "item13", "item13", "item13"};
+    int[] inp2_1 = {2, 7, 3, 13, 13, 13};
+    int[] inp2_2 = {20, 70, 33, 130, 130, 130};
+    String[] inp2_3 = {"item1", "item7", "item3", "item13", "item13", "item13"};
 
-    int inp3_1[]    = {     17,       23,       130,        0};
-    int inp3_2[]    = {    170,      230,      1300,        0};
-    String inp3_3[] = {"item7", "item23", "item130", "item130"};
+    int[] inp3_1 = {17, 23, 130, 0};
+    int[] inp3_2 = {170, 230, 1300, 0};
+    String[] inp3_3 = {"item7", "item23", "item130", "item130"};
 
-    String exp1_1[] = {"item1", "item7", "item3", "item13", "item23", "item130"};
-    int exp1_2[]    = {     33,     264,      36,      429,      253,      1430};
+    String[] exp1_1 = {"item1", "item7", "item3", "item13", "item23", "item130"};
+    int[] exp1_2 = {33, 264, 36, 429, 253, 1430};
 
-    int inpRowSet[] = {1, 0, 2, 0, 3, 0};
-    RecordBatch.IterOutcome inpOutcomes[] = {OK_NEW_SCHEMA, OK, OK, OK, OK, OK};
+    int[] inpRowSet = {1, 0, 2, 0, 3, 0};
+    RecordBatch.IterOutcome[] inpOutcomes = {OK_NEW_SCHEMA, OK, OK, OK, OK, OK};
 
     List<Integer> outputRowCounts = Arrays.asList(0, 6, 0);
     List<RecordBatch.IterOutcome> outputOutcomes = Arrays.asList(OK_NEW_SCHEMA, OK, NONE);
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestMetastoreCommands.java b/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestMetastoreCommands.java
index e1ece37..c039d34 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestMetastoreCommands.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/sql/TestMetastoreCommands.java
@@ -224,7 +224,12 @@ public class TestMetastoreCommands extends ClusterTest {
       run("create table dfs.tmp.`%s` as\n" +
           "select * from cp.`tpch/region.parquet`", tableName);
 
-      run("analyze table dfs.tmp.`%s` columns none REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` columns none REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       String query = "select mykey from dfs.tmp.`%s` where mykey is null";
 
@@ -361,7 +366,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("ANALYZE TABLE dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -500,7 +510,12 @@ public class TestMetastoreCommands extends ClusterTest {
           .build();
 
       try {
-        run("ANALYZE TABLE dfs.tmp.`%s` REFRESH METADATA '%s' level", tableName, analyzeLevel.name());
+        testBuilder()
+            .sqlQuery("ANALYZE TABLE dfs.tmp.`%s` REFRESH METADATA '%s' level", tableName, analyzeLevel.name())
+            .unOrdered()
+            .baselineColumns("ok", "summary")
+            .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+            .go();
 
         BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
             .getMetastoreRegistry()
@@ -530,7 +545,12 @@ public class TestMetastoreCommands extends ClusterTest {
 
     for (MetadataType analyzeLevel : analyzeLevels) {
       try {
-        run("ANALYZE TABLE dfs.tmp.`%s` REFRESH METADATA '%s' level", tableName, analyzeLevel.name());
+        testBuilder()
+            .sqlQuery("ANALYZE TABLE dfs.tmp.`%s` REFRESH METADATA '%s' level", tableName, analyzeLevel.name())
+            .unOrdered()
+            .baselineColumns("ok", "summary")
+            .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+            .go();
 
         List<String> emptyMetadataLevels = Arrays.stream(MetadataType.values())
             .filter(metadataType -> metadataType.compareTo(analyzeLevel) > 0
@@ -594,7 +614,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("ANALYZE TABLE dfs.tmp.`%s` columns(o_orderstatus) REFRESH METADATA 'row_group' LEVEL", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.tmp.`%s` columns(o_orderstatus) REFRESH METADATA 'row_group' LEVEL", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -639,7 +664,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("ANALYZE TABLE dfs.tmp.`%s` columns NONE REFRESH METADATA 'row_group' LEVEL", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.tmp.`%s` columns NONE REFRESH METADATA 'row_group' LEVEL", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -688,7 +718,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("ANALYZE TABLE dfs.tmp.`%s` columns(o_orderstatus, o_orderdate) REFRESH METADATA 'row_group' LEVEL", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.tmp.`%s` columns(o_orderstatus, o_orderdate) REFRESH METADATA 'row_group' LEVEL", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -753,7 +788,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("ANALYZE TABLE dfs.tmp.`%s` columns(o_orderstatus) REFRESH METADATA 'row_group' LEVEL", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.tmp.`%s` columns(o_orderstatus) REFRESH METADATA 'row_group' LEVEL", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -826,7 +866,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("ANALYZE TABLE dfs.tmp.`%s` columns(o_orderstatus, o_orderdate) REFRESH METADATA 'row_group' LEVEL", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.tmp.`%s` columns(o_orderstatus, o_orderdate) REFRESH METADATA 'row_group' LEVEL", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -1658,7 +1703,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("ANALYZE TABLE dfs.tmp.`%s` REFRESH METADATA 'file' LEVEL", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.tmp.`%s` REFRESH METADATA 'file' LEVEL", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -1795,7 +1845,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("ANALYZE TABLE dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -1883,7 +1938,12 @@ public class TestMetastoreCommands extends ClusterTest {
         .build();
 
     try {
-      run("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       BaseTableMetadata actualTableMetadata = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -1905,7 +1965,12 @@ public class TestMetastoreCommands extends ClusterTest {
     dirTestWatcher.copyResourceToTestTmp(Paths.get("multilevel/parquet"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       String query =
           "select dir0, dir1, o_custkey, o_orderdate from dfs.tmp.`%s`\n" +
@@ -1936,7 +2001,12 @@ public class TestMetastoreCommands extends ClusterTest {
     dirTestWatcher.copyResourceToRoot(Paths.get("multilevel/parquet"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       String query =
           "select dir0, dir1, o_custkey, o_orderdate from dfs.`%s`\n" +
@@ -1968,20 +2038,25 @@ public class TestMetastoreCommands extends ClusterTest {
       run("create table dfs.%s (o_orderdate, o_orderpriority) partition by (o_orderpriority)\n"
           + "as select o_orderdate, o_orderpriority from dfs.`multilevel/parquet/1994/Q1`", tableName);
 
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       String query = "select * from dfs.%s where o_orderpriority = '1-URGENT'";
       long expectedRowCount = 3;
-      int expectedNumFiles = 1;
 
       long actualRowCount = queryBuilder().sql(query, tableName).run().recordCount();
       assertEquals(expectedRowCount, actualRowCount);
-      String numFilesPattern = "numFiles=" + expectedNumFiles;
       String usedMetaPattern = "usedMetastore=true";
 
+      // do not check the expected number of files since CTAS may create
+      // a different number of files due to the small planner.slice_target value
       queryBuilder().sql(query, tableName)
           .planMatcher()
-          .include(numFilesPattern, usedMetaPattern)
+          .include(usedMetaPattern)
           .exclude("Filter")
           .match();
     } finally {
@@ -1999,20 +2074,26 @@ public class TestMetastoreCommands extends ClusterTest {
           + "as select o_orderdate, convert_to(o_orderpriority, 'UTF8') as o_orderpriority\n"
           + "from dfs.`multilevel/parquet/1994/Q1`", tableName);
 
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
+
       String query = String.format("select * from dfs.%s where o_orderpriority = '1-URGENT'", tableName);
       long expectedRowCount = 3;
-      int expectedNumFiles = 1;
 
       long actualRowCount = queryBuilder().sql(query).run().recordCount();
       assertEquals(expectedRowCount, actualRowCount);
 
-      String numFilesPattern = "numFiles=" + expectedNumFiles;
       String usedMetaPattern = "usedMetastore=true";
 
+      // do not check the expected number of files since CTAS may create
+      // a different number of files due to the small planner.slice_target value
       queryBuilder().sql(query, tableName)
           .planMatcher()
-          .include(numFilesPattern, usedMetaPattern)
+          .include(usedMetaPattern)
           .exclude("Filter")
           .match();
     } finally {
@@ -2028,7 +2109,12 @@ public class TestMetastoreCommands extends ClusterTest {
     dirTestWatcher.copyResourceToRoot(Paths.get("multilevel/parquet2"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       String query =
           "select dir0, dir1, o_custkey, o_orderdate from dfs.`%s`\n" +
@@ -2058,7 +2144,12 @@ public class TestMetastoreCommands extends ClusterTest {
     dirTestWatcher.copyResourceToRoot(Paths.get("multilevel/parquet2"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       String query =
           "select dir0, dir1, o_custkey, o_orderdate from dfs.`%s`\n" +
@@ -2089,7 +2180,12 @@ public class TestMetastoreCommands extends ClusterTest {
     dirTestWatcher.copyResourceToRoot(Paths.get("multilevel/parquet2"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       String query =
           "select dir0, dir1, o_custkey, o_orderdate from dfs.`%s`\n" +
@@ -2118,7 +2214,12 @@ public class TestMetastoreCommands extends ClusterTest {
     dirTestWatcher.copyResourceToRoot(Paths.get("multilevel/parquet2"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       String query =
           "select dir0, dir1, o_custkey, o_orderdate from dfs.`%s`\n" +
@@ -2150,7 +2251,12 @@ public class TestMetastoreCommands extends ClusterTest {
       run("create table dfs.`%s` as select * from cp.`tpch/nation.parquet`", tableName);
       run("create table dfs.`%1$s/%1$s` as select * from cp.`tpch/nation.parquet`", tableName);
 
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       String query = "select * from  dfs.`%s`";
       long expectedRowCount = 50;
@@ -2174,7 +2280,7 @@ public class TestMetastoreCommands extends ClusterTest {
 
   @Test
   public void testFieldWithDots() throws Exception {
-    String tableName = "dfs.tmp.`complex_table`";
+    String tableName = "dfs.tmp.complex_table";
     try {
       run("create table %s as\n" +
           "select cast(1 as int) as `column.with.dots`, t.`column`.`with.dots`\n" +
@@ -2191,7 +2297,12 @@ public class TestMetastoreCommands extends ClusterTest {
           .include("usedMetastore=false")
           .match();
 
-      run("analyze table %s REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table %s REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [%s]", tableName))
+          .go();
 
       actualRowCount = queryBuilder().sql(query, tableName).run().recordCount();
 
@@ -2208,13 +2319,18 @@ public class TestMetastoreCommands extends ClusterTest {
 
   @Test
   public void testBooleanPartitionPruning() throws Exception {
-    String tableName = "dfs.tmp.`interval_bool_partition`";
+    String tableName = "dfs.tmp.interval_bool_partition";
 
     try {
       run("create table %s partition by (col_bln) as\n" +
           "select * from cp.`parquet/alltypes_required.parquet`", tableName);
 
-      run("analyze table %s REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table %s REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [%s]", tableName))
+          .go();
 
       String query = "select * from %s where col_bln = true";
       int expectedRowCount = 2;
@@ -2249,7 +2365,12 @@ public class TestMetastoreCommands extends ClusterTest {
           "union all\n" +
           "select col_notexist from cp.`tpch/region.parquet`", tableName);
 
-      run("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       String query = "select mykey from dfs.tmp.`t5` where mykey = 100";
       long actualRowCount = queryBuilder().sql(query, tableName).run().recordCount();
@@ -2278,7 +2399,12 @@ public class TestMetastoreCommands extends ClusterTest {
       run("create table dfs.tmp.`%s/b` as\n" +
           "select case when true then 100 else null end as mykey from cp.`tpch/region.parquet`", tableName);
 
-      run("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       String query = "select mykey from dfs.tmp.`%s` where mykey is null";
 
@@ -2308,7 +2434,12 @@ public class TestMetastoreCommands extends ClusterTest {
       run("create table dfs.tmp.`%s/b` as\n" +
           "select  case when true then 100 else null end as mykey from cp.`tpch/region.parquet`", tableName);
 
-      run("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       String query = "select mykey from dfs.tmp.`%s` where mykey is null";
 
@@ -2338,7 +2469,12 @@ public class TestMetastoreCommands extends ClusterTest {
       run("create table dfs.tmp.`%s/b` as\n" +
           "select case when true then 100 else null end as mykey from cp.`tpch/region.parquet`", tableName);
 
-      run("analyze table dfs.tmp.`%s` columns none REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` columns none REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       String query = "select mykey from dfs.tmp.`%s` where mykey is null";
 
@@ -2364,7 +2500,12 @@ public class TestMetastoreCommands extends ClusterTest {
     dirTestWatcher.copyResourceToRoot(Paths.get("multilevel/parquet"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.`%s` REFRESH METADATA 'file' level", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA 'file' level", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
 
       String query = "select * from dfs.`%s`";
       long expectedRowCount = 120;
@@ -2387,13 +2528,18 @@ public class TestMetastoreCommands extends ClusterTest {
   }
 
   @Test
-  public void testAnalyzeWithFallbackError() throws Exception {
+  public void testAnalyzeWithDisabledFallback() throws Exception {
     String tableName = "parquetAnalyzeWithFallback";
 
     dirTestWatcher.copyResourceToRoot(Paths.get("multilevel/parquet"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.`%s` REFRESH METADATA 'file' level", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA 'file' level", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
       client.alterSession(ExecConstants.METASTORE_FALLBACK_TO_FILE_METADATA, false);
 
       queryBuilder()
@@ -2414,7 +2560,12 @@ public class TestMetastoreCommands extends ClusterTest {
     dirTestWatcher.copyResourceToRoot(Paths.get("multilevel/parquet"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.default.%s]", tableName))
+          .go();
       client.alterSession(ExecConstants.METASTORE_USE_SCHEMA_METADATA, false);
 
       queryBuilder()
@@ -2438,7 +2589,12 @@ public class TestMetastoreCommands extends ClusterTest {
       client.alterSession(ExecConstants.METASTORE_USE_SCHEMA_METADATA, false);
       client.alterSession(ExecConstants.STORE_TABLE_USE_SCHEMA_FILE, true);
       run("create table %s as select 'a' as c from (values(1))", table);
-      run("analyze table %s REFRESH METADATA", table);
+      testBuilder()
+          .sqlQuery("analyze table %s REFRESH METADATA", table)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [%s]", table))
+          .go();
 
       run("create schema (o_orderstatus varchar) for table %s", table);
 
@@ -2460,7 +2616,12 @@ public class TestMetastoreCommands extends ClusterTest {
 
       client.alterSession(PlannerSettings.STATISTICS_USE.getOptionName(), true);
 
-      run("analyze table %s REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table %s REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [%s]", tableName))
+          .go();
 
       String query = " select employee_id from %s where department_id = 2";
 
@@ -2487,7 +2648,12 @@ public class TestMetastoreCommands extends ClusterTest {
 
       client.alterSession(PlannerSettings.STATISTICS_USE.getOptionName(), false);
 
-      run("analyze table %s REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table %s REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [%s]", tableName))
+          .go();
 
       String query = "select employee_id from %s where department_id = 2";
 
@@ -2515,7 +2681,12 @@ public class TestMetastoreCommands extends ClusterTest {
 
       client.alterSession(PlannerSettings.STATISTICS_USE.getOptionName(), false);
 
-      run("analyze table %s REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table %s REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [%s]", tableName))
+          .go();
 
       client.alterSession(PlannerSettings.STATISTICS_USE.getOptionName(), true);
 
@@ -2546,7 +2717,12 @@ public class TestMetastoreCommands extends ClusterTest {
 
       client.alterSession(PlannerSettings.STATISTICS_USE.getOptionName(), true);
 
-      run("ANALYZE TABLE %s COLUMNS(department_id) REFRESH METADATA COMPUTE STATISTICS SAMPLE 95 PERCENT", tableName);
+      testBuilder()
+          .sqlQuery("ANALYZE TABLE %s COLUMNS(department_id) REFRESH METADATA COMPUTE STATISTICS SAMPLE 95 PERCENT", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [%s]", tableName))
+          .go();
 
       String query = "select employee_id from %s where department_id = 2";
 
@@ -2572,7 +2748,12 @@ public class TestMetastoreCommands extends ClusterTest {
       run("create table dfs.tmp.`%s` as\n" +
           "select * from cp.`tpch/region.parquet`", tableName);
 
-      run("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       MetastoreTableInfo metastoreTableInfo = cluster.drillbit().getContext()
           .getMetastoreRegistry()
@@ -2712,7 +2893,12 @@ public class TestMetastoreCommands extends ClusterTest {
     File table = dirTestWatcher.copyResourceToTestTmp(Paths.get("multilevel/parquet"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       File fileToUpdate = new File(new File(new File(table, "1994"), "Q4"), "orders_94_q4.parquet");
       long lastModified = fileToUpdate.lastModified();
@@ -2756,7 +2942,12 @@ public class TestMetastoreCommands extends ClusterTest {
     File table = dirTestWatcher.copyResourceToTestTmp(Paths.get("multilevel/parquet"), Paths.get(tableName));
 
     try {
-      run("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName);
+      testBuilder()
+          .sqlQuery("analyze table dfs.tmp.`%s` REFRESH METADATA", tableName)
+          .unOrdered()
+          .baselineColumns("ok", "summary")
+          .baselineValues(true, String.format("Collected / refreshed metadata for table [dfs.tmp.%s]", tableName))
+          .go();
 
       dirTestWatcher.copyResourceToTestTmp(
           Paths.get("multilevel", "parquet", "1994", "Q1", "orders_94_q1.parquet"),
diff --git a/exec/java-exec/src/test/resources/functions/test_covariance.json b/exec/java-exec/src/test/resources/functions/test_covariance.json
index 0d74f70..33f9567 100644
--- a/exec/java-exec/src/test/resources/functions/test_covariance.json
+++ b/exec/java-exec/src/test/resources/functions/test_covariance.json
@@ -64,10 +64,10 @@
       "ref" : "`EXPR$7`",
       "expr" : "corr(`A`, `B`) "
     }, {
-      "ref" : "`EXPR$7`",
+      "ref" : "`EXPR$8`",
       "expr" : "corr(`A`, `C`) "
     }, {
-      "ref" : "`EXPR$8`",
+      "ref" : "`EXPR$9`",
       "expr" : "corr(`C`, `D`) "
     } ]
   }, {
@@ -75,4 +75,4 @@
     "@id" : 4,
     "child" : 3
   } ]
-}
\ No newline at end of file
+}
diff --git a/exec/java-exec/src/test/resources/functions/test_logical_aggr.json b/exec/java-exec/src/test/resources/functions/test_logical_aggr.json
index f6d83b5..6e071c5 100644
--- a/exec/java-exec/src/test/resources/functions/test_logical_aggr.json
+++ b/exec/java-exec/src/test/resources/functions/test_logical_aggr.json
@@ -61,10 +61,10 @@
       "ref" : "`EXPR$5`",
       "expr" : "bit_or(`D`) "
     }, {
-      "ref" : "`EXPR$5`",
+      "ref" : "`EXPR$6`",
       "expr" : "bool_or(`C`) "
     }, {
-      "ref" : "`EXPR$6`",
+      "ref" : "`EXPR$7`",
       "expr" : "bool_and(`C`) "
     } ]
   }, {
@@ -72,4 +72,4 @@
     "@id" : 4,
     "child" : 3
   } ]
-}
\ No newline at end of file
+}
diff --git a/exec/vector/src/main/codegen/templates/ComplexWriters.java b/exec/vector/src/main/codegen/templates/ComplexWriters.java
index 3539c62..904c825 100644
--- a/exec/vector/src/main/codegen/templates/ComplexWriters.java
+++ b/exec/vector/src/main/codegen/templates/ComplexWriters.java
@@ -106,7 +106,11 @@ public class ${eName}WriterImpl extends AbstractFieldWriter {
 
   public void setPosition(int idx) {
     super.setPosition(idx);
-    mutator.startNewValue(idx);
+    // call startNewValue only in the case
+    // where it would not override existing data
+    if (idx >= vector.getAccessor().getValueCount()) {
+      mutator.startNewValue(idx);
+    }
   }
   
   <#else>
diff --git a/logical/src/main/java/org/apache/drill/common/expression/FunctionHolderExpression.java b/logical/src/main/java/org/apache/drill/common/expression/FunctionHolderExpression.java
index 8d4db48..30f6975 100644
--- a/logical/src/main/java/org/apache/drill/common/expression/FunctionHolderExpression.java
+++ b/logical/src/main/java/org/apache/drill/common/expression/FunctionHolderExpression.java
@@ -95,7 +95,7 @@ public abstract class FunctionHolderExpression extends LogicalExpressionBase {
    *
    * @param fieldReference FieldReference to set.
    */
-  public void getFieldReference(FieldReference fieldReference) {
+  public void setFieldReference(FieldReference fieldReference) {
     this.fieldReference = fieldReference;
   }
 }
diff --git a/logical/src/main/java/org/apache/drill/common/logical/data/MetadataAggregate.java b/logical/src/main/java/org/apache/drill/common/logical/data/MetadataAggregate.java
index 8d065bb..183c9b2 100644
--- a/logical/src/main/java/org/apache/drill/common/logical/data/MetadataAggregate.java
+++ b/logical/src/main/java/org/apache/drill/common/logical/data/MetadataAggregate.java
@@ -33,6 +33,6 @@ public class MetadataAggregate extends SingleInputOperator {
 
   @Override
   public <T, X, E extends Throwable> T accept(LogicalVisitor<T, X, E> logicalVisitor, X value) throws E {
-    throw new UnsupportedOperationException("MetadataController does not support visitors");
+    throw new UnsupportedOperationException("MetadataAggregate does not support visitors");
   }
 }


[drill] 02/11: DRILL-6540: Upgrade to HADOOP-3.0.3 libraries

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit f3e32c359702a6b27ca9963008d44f894f6bbb54
Author: Vitalii Diravka <vi...@gmail.com>
AuthorDate: Tue Sep 4 20:02:43 2018 +0300

    DRILL-6540: Upgrade to HADOOP-3.0.3 libraries
    
    - accommodate apache and mapr profiles with hadoop 3.0 libraries
    - update HBase version
    - fix jdbc-all woodstox dependency
    - unban Apache commons-logging dependency
---
 drill-yarn/pom.xml                                 |   6 +
 exec/java-exec/pom.xml                             |  71 ++++++++
 .../impl/scan/file/FileMetadataManager.java        |   8 +-
 .../drill/exec/store/LocalSyncableFileSystem.java  |   4 +-
 .../impersonation/TestImpersonationMetadata.java   |  15 +-
 .../exec/physical/unit/TestOutputBatchSize.java    |   5 +-
 .../org/apache/drill/exec/work/batch/FileTest.java |   4 +-
 exec/jdbc-all/pom.xml                              |   5 +-
 .../org/apache/drill/jdbc/DrillbitClassLoader.java |  21 +--
 .../org/apache/drill/jdbc/ITTestShadedJar.java     |   7 +-
 pom.xml                                            | 190 ++++++++++++---------
 11 files changed, 222 insertions(+), 114 deletions(-)

diff --git a/drill-yarn/pom.xml b/drill-yarn/pom.xml
index 9de6d47..57d2073 100644
--- a/drill-yarn/pom.xml
+++ b/drill-yarn/pom.xml
@@ -83,6 +83,12 @@
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-yarn-client</artifactId>
       <scope>compile</scope>
+      <exclusions>
+        <exclusion>
+          <artifactId>slf4j-log4j12</artifactId>
+          <groupId>org.slf4j</groupId>
+        </exclusion>
+      </exclusions>
     </dependency>
 
     <!--  For ZK monitoring -->
diff --git a/exec/java-exec/pom.xml b/exec/java-exec/pom.xml
index 38a872b..176b388 100644
--- a/exec/java-exec/pom.xml
+++ b/exec/java-exec/pom.xml
@@ -396,6 +396,26 @@
             <groupId>io.netty</groupId>
             <artifactId>netty-all</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-server</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-servlet</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-servlets</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-security</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.eclipse.jetty</groupId>
+          <artifactId>jetty-util</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
     <dependency>
@@ -434,6 +454,57 @@
           <groupId>commons-codec</groupId>
           <artifactId>commons-codec</artifactId>
         </exclusion>
+<!---->
+        <!--<exclusion>-->
+          <!--<groupId>com.sun.jersey</groupId>-->
+          <!--<artifactId>jersey-core</artifactId>-->
+        <!--</exclusion>-->
+        <!--<exclusion>-->
+          <!--<groupId>com.sun.jersey</groupId>-->
+          <!--<artifactId>jersey-server</artifactId>-->
+        <!--</exclusion>-->
+        <!--<exclusion>-->
+          <!--<groupId>com.sun.jersey</groupId>-->
+          <!--<artifactId>jersey-json</artifactId>-->
+        <!--</exclusion>-->
+<!---->
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdfs</artifactId>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>io.netty</groupId>
+          <artifactId>netty-all</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>commons-codec</groupId>
+          <artifactId>commons-codec</artifactId>
+        </exclusion>
+        <!---->
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-core</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-server</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>com.sun.jersey</groupId>
+          <artifactId>jersey-json</artifactId>
+        </exclusion>
+        <!---->
+        <exclusion>
+          <groupId>log4j</groupId>
+          <artifactId>log4j</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
     <dependency>
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/scan/file/FileMetadataManager.java b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/scan/file/FileMetadataManager.java
index c8bb5ed..330a2ab 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/scan/file/FileMetadataManager.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/scan/file/FileMetadataManager.java
@@ -21,7 +21,6 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
 
-import org.apache.directory.api.util.Strings;
 import org.apache.drill.common.map.CaseInsensitiveMap;
 import org.apache.drill.exec.ExecConstants;
 import org.apache.drill.exec.physical.impl.scan.project.ColumnProjection;
@@ -37,6 +36,7 @@ import org.apache.drill.exec.record.metadata.TupleMetadata;
 import org.apache.drill.exec.server.options.OptionSet;
 import org.apache.drill.exec.store.ColumnExplorer.ImplicitFileColumns;
 import org.apache.drill.exec.vector.ValueVector;
+import org.apache.drill.shaded.guava.com.google.common.base.Strings;
 import org.apache.hadoop.fs.Path;
 
 import org.apache.drill.shaded.guava.com.google.common.annotations.VisibleForTesting;
@@ -58,7 +58,7 @@ import org.apache.drill.shaded.guava.com.google.common.annotations.VisibleForTes
  * On each file (on each reader), the columns are "resolved." Here, that means
  * that the columns are filled in with actual values based on the present file.
  * <p>
- * This is the successor to {@link ColumnExplorer}.
+ * This is the successor to {@link org.apache.drill.exec.store.ColumnExplorer}.
  */
 
 public class FileMetadataManager implements MetadataManager, ReaderProjectionResolver, VectorSource {
@@ -167,8 +167,6 @@ public class FileMetadataManager implements MetadataManager, ReaderProjectionRes
    * one file, rather than a directory
    * @param files the set of files to scan. Used to compute the maximum partition
    * depth across all readers in this fragment
-   *
-   * @return this builder
    */
 
   public FileMetadataManager(OptionSet optionManager,
@@ -178,7 +176,7 @@ public class FileMetadataManager implements MetadataManager, ReaderProjectionRes
     partitionDesignator = optionManager.getString(ExecConstants.FILESYSTEM_PARTITION_COLUMN_LABEL);
     for (ImplicitFileColumns e : ImplicitFileColumns.values()) {
       String colName = optionManager.getString(e.optionName());
-      if (! Strings.isEmpty(colName)) {
+      if (!Strings.isNullOrEmpty(colName)) {
         FileMetadataColumnDefn defn = new FileMetadataColumnDefn(colName, e);
         implicitColDefns.add(defn);
         fileMetadataColIndex.put(defn.colName, defn);
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/LocalSyncableFileSystem.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/LocalSyncableFileSystem.java
index 9363954..21d66f0 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/LocalSyncableFileSystem.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/LocalSyncableFileSystem.java
@@ -65,7 +65,7 @@ public class LocalSyncableFileSystem extends FileSystem {
 
   @Override
   public FSDataOutputStream create(Path path, FsPermission fsPermission, boolean b, int i, short i2, long l, Progressable progressable) throws IOException {
-    return new FSDataOutputStream(new LocalSyncableOutputStream(path));
+    return new FSDataOutputStream(new LocalSyncableOutputStream(path), new Statistics(path.toUri().getScheme()));
   }
 
   @Override
@@ -141,7 +141,7 @@ public class LocalSyncableFileSystem extends FileSystem {
       output = new BufferedOutputStream(fos, 64*1024);
     }
 
-    @Override
+    // TODO: remove it after upgrading the MapR profile to hadoop.version 3.1
     public void sync() throws IOException {
       output.flush();
       fos.getFD().sync();
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/impersonation/TestImpersonationMetadata.java b/exec/java-exec/src/test/java/org/apache/drill/exec/impersonation/TestImpersonationMetadata.java
index 2c6e4ee..0e9c0e0 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/impersonation/TestImpersonationMetadata.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/impersonation/TestImpersonationMetadata.java
@@ -268,17 +268,20 @@ public class TestImpersonationMetadata extends BaseTestImpersonation {
   @Test
   public void testCreateViewInWSWithNoPermissionsForQueryUser() throws Exception {
     // Workspace dir owned by "processUser", workspace group is "group0" and "user2" is not part of "group0"
-    final String viewSchema = MINI_DFS_STORAGE_PLUGIN_NAME + ".drill_test_grp_0_755";
+    final String tableWS = "drill_test_grp_0_755";
+    final String viewSchema = MINI_DFS_STORAGE_PLUGIN_NAME + "." + tableWS;
     final String viewName = "view1";
 
     updateClient(user2);
 
     test("USE " + viewSchema);
 
-    final String query = "CREATE VIEW " + viewName + " AS SELECT " +
-        "c_custkey, c_nationkey FROM cp.`tpch/customer.parquet` ORDER BY c_custkey;";
-    final String expErrorMsg = "PERMISSION ERROR: Permission denied: user=drillTestUser2, access=WRITE, inode=\"/drill_test_grp_0_755";
-    errorMsgTestHelper(query, expErrorMsg);
+    String expErrorMsg = "PERMISSION ERROR: Permission denied: user=drillTestUser2, access=WRITE, inode=\"/" + tableWS;
+    thrown.expect(UserRemoteException.class);
+    thrown.expectMessage(containsString(expErrorMsg));
+
+    test("CREATE VIEW %s AS" +
+        " SELECT c_custkey, c_nationkey FROM cp.`tpch/customer.parquet` ORDER BY c_custkey", viewName);
 
     // SHOW TABLES is expected to return no records as view creation fails above.
     testBuilder()
@@ -348,7 +351,7 @@ public class TestImpersonationMetadata extends BaseTestImpersonation {
 
     thrown.expect(UserRemoteException.class);
     thrown.expectMessage(containsString("Permission denied: user=drillTestUser2, " +
-        "access=WRITE, inode=\"/drill_test_grp_0_755"));
+        "access=WRITE, inode=\"/" + tableWS));
 
     test("CREATE TABLE %s AS SELECT c_custkey, c_nationkey " +
         "FROM cp.`tpch/customer.parquet` ORDER BY c_custkey", tableName);
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/TestOutputBatchSize.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/TestOutputBatchSize.java
index ce1b8c9..97ffb21 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/TestOutputBatchSize.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/unit/TestOutputBatchSize.java
@@ -20,7 +20,6 @@ package org.apache.drill.exec.physical.unit;
 import org.apache.drill.shaded.guava.com.google.common.collect.ImmutableList;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.apache.calcite.rel.core.JoinRelType;
-import org.apache.directory.api.util.Strings;
 import org.apache.drill.common.exceptions.ExecutionSetupException;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.common.expression.LogicalExpression;
@@ -328,7 +327,7 @@ public class TestOutputBatchSize extends PhysicalOpUnitTestBase {
         expr[i * 2] = "lower(" + baselineColumns[i] + ")";
         expr[i * 2 + 1] = baselineColumns[i];
       }
-      baselineValues[i] = (transfer ? testString : Strings.lowerCase(testString));
+      baselineValues[i] = (transfer ? testString : testString.toLowerCase());
     }
     jsonRow.append("}");
     StringBuilder batchString = new StringBuilder("[");
@@ -385,7 +384,7 @@ public class TestOutputBatchSize extends PhysicalOpUnitTestBase {
       expr[i * 2] = "lower(" + baselineColumns[i] + ")";
       expr[i * 2 + 1] = baselineColumns[i];
 
-      baselineValues[i] = Strings.lowerCase(testString);
+      baselineValues[i] = testString.toLowerCase();
     }
 
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/work/batch/FileTest.java b/exec/java-exec/src/test/java/org/apache/drill/exec/work/batch/FileTest.java
index 2799838..04e59f6 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/work/batch/FileTest.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/work/batch/FileTest.java
@@ -43,7 +43,7 @@ public class FileTest {
     FSDataOutputStream out = fs.create(path);
     byte[] s = "hello world".getBytes();
     out.write(s);
-    out.sync();
+    out.hsync();
     FSDataInputStream in = fs.open(path);
     byte[] bytes = new byte[s.length];
     in.read(bytes);
@@ -60,7 +60,7 @@ public class FileTest {
       bytes = new byte[256*1024];
       Stopwatch watch = Stopwatch.createStarted();
       out.write(bytes);
-      out.sync();
+      out.hsync();
       long t = watch.elapsed(TimeUnit.MILLISECONDS);
       logger.info(String.format("Elapsed: %d. Rate %d.\n", t, (long) ((long) bytes.length * 1000L / t)));
     }
diff --git a/exec/jdbc-all/pom.xml b/exec/jdbc-all/pom.xml
index 13234fd..d523606 100644
--- a/exec/jdbc-all/pom.xml
+++ b/exec/jdbc-all/pom.xml
@@ -249,6 +249,7 @@
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-failsafe-plugin</artifactId>
+        <version>3.0.0-M3</version>
         <executions>
           <execution>
             <goals>
@@ -341,6 +342,7 @@
               <exclude>commons-beanutils:commons-beanutils-core:jar:*</exclude>
               <exclude>commons-beanutils:commons-beanutils:jar:*</exclude>
               <exclude>io.netty:netty-tcnative:jar:*</exclude>
+              <exclude>com.fasterxml.woodstox:woodstox-core:jar:*</exclude>
             </excludes>
           </artifactSet>
           <relocations>
@@ -403,6 +405,7 @@
             <relocation><pattern>org.apache.xpath.</pattern><shadedPattern>oadd.org.apache.xpath.</shadedPattern></relocation>
             <relocation><pattern>org.apache.zookeeper.</pattern><shadedPattern>oadd.org.apache.zookeeper.</shadedPattern></relocation>
             <relocation><pattern>org.apache.hadoop.</pattern><shadedPattern>oadd.org.apache.hadoop.</shadedPattern></relocation>
+            <relocation><pattern>com.fasterxml.woodstox.</pattern><shadedPattern>oadd.com.fasterxml.woodstox.</shadedPattern></relocation>
           </relocations>
           <transformers>
             <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
@@ -528,7 +531,7 @@
                   This is likely due to you adding new dependencies to a java-exec and not updating the excludes in this module. This is important as it minimizes the size of the dependency of Drill application users.
 
                   </message>
-                  <maxsize>41000000</maxsize>
+                  <maxsize>42600000</maxsize>
                   <minsize>15000000</minsize>
                   <files>
                    <file>${project.build.directory}/drill-jdbc-all-${project.version}.jar</file>
diff --git a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/DrillbitClassLoader.java b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/DrillbitClassLoader.java
index bc31f99..eaedf56 100644
--- a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/DrillbitClassLoader.java
+++ b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/DrillbitClassLoader.java
@@ -26,16 +26,16 @@ import java.util.List;
 
 public class DrillbitClassLoader extends URLClassLoader {
 
-  public DrillbitClassLoader() {
+  DrillbitClassLoader() {
     super(URLS);
   }
 
   private static final URL[] URLS;
 
   static {
-    ArrayList<URL> urlList = new ArrayList<URL>();
+    ArrayList<URL> urlList = new ArrayList<>();
     final String classPath = System.getProperty("app.class.path");
-    final String[] st = fracture(classPath, File.pathSeparator);
+    final String[] st = fracture(classPath);
     final int l = st.length;
     for (int i = 0; i < l; i++) {
       try {
@@ -49,10 +49,7 @@ public class DrillbitClassLoader extends URLClassLoader {
     }
     urlList.toArray(new URL[urlList.size()]);
 
-    List<URL> urls = new ArrayList<>();
-    for (URL url : urlList) {
-      urls.add(url);
-    }
+    List<URL> urls = new ArrayList<>(urlList);
     URLS = urls.toArray(new URL[urls.size()]);
   }
 
@@ -61,21 +58,21 @@ public class DrillbitClassLoader extends URLClassLoader {
    *
    * Taken from Apache Harmony
    */
-  private static String[] fracture(String str, String sep) {
+  private static String[] fracture(String str) {
     if (str.length() == 0) {
       return new String[0];
     }
-    ArrayList<String> res = new ArrayList<String>();
+    ArrayList<String> res = new ArrayList<>();
     int in = 0;
     int curPos = 0;
-    int i = str.indexOf(sep);
-    int len = sep.length();
+    int i = str.indexOf(File.pathSeparator);
+    int len = File.pathSeparator.length();
     while (i != -1) {
       String s = str.substring(curPos, i);
       res.add(s);
       in++;
       curPos = i + len;
-      i = str.indexOf(sep, curPos);
+      i = str.indexOf(File.pathSeparator, curPos);
     }
 
     len = str.length();
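
The reworked fracture() now always splits on File.pathSeparator. A minimal sketch (not part of the patch; the class name and use of java.class.path are assumptions) of an equivalent split using the standard library:

import java.io.File;
import java.util.regex.Pattern;

public class ClassPathSplitSketch {
  public static void main(String[] args) {
    String classPath = System.getProperty("java.class.path", "");
    // Pattern.quote() keeps the separator literal on every platform; note that
    // split() drops trailing empty entries, which is fine for class path strings.
    String[] entries = classPath.isEmpty()
        ? new String[0]
        : classPath.split(Pattern.quote(File.pathSeparator));
    for (String entry : entries) {
      System.out.println(entry);
    }
  }
}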
diff --git a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
index 99f399d..19a4be8 100644
--- a/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
+++ b/exec/jdbc-all/src/test/java/org/apache/drill/jdbc/ITTestShadedJar.java
@@ -105,6 +105,7 @@ public class ITTestShadedJar extends BaseTest {
       super.failed(e, description);
       done();
       runMethod("failed", description);
+      logger.error("Check whether this test was running within 'integration-test' Maven phase");
     }
 
     private void done() {
@@ -235,8 +236,8 @@ public class ITTestShadedJar extends BaseTest {
 
   private static void runWithLoader(String name, ClassLoader loader) throws Exception {
     Class<?> clazz = loader.loadClass(ITTestShadedJar.class.getName() + "$" + name);
-    Object o = clazz.getDeclaredConstructors()[0].newInstance(loader);
-    clazz.getMethod("go").invoke(o);
+    Object instance = clazz.getDeclaredConstructors()[0].newInstance(loader);
+    clazz.getMethod("go").invoke(instance);
   }
 
   public abstract static class AbstractLoaderThread extends Thread {
@@ -283,7 +284,7 @@ public class ITTestShadedJar extends BaseTest {
       // loader.loadClass("org.apache.drill.exec.exception.SchemaChangeException");
 
       // execute a single query to make sure the drillbit is fully up
-      clazz.getMethod("testNoResult", String.class, new Object[] {}.getClass())
+      clazz.getMethod("testNoResult", String.class, Object[].class)
           .invoke(null, "select * from (VALUES 1)", new Object[] {});
 
       SEM.release();
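
On the reflective lookup above: Object[].class denotes the same Class instance as new Object[] {}.getClass(), so the method resolution is unchanged, just expressed without allocating a throwaway array. A minimal sketch (not part of the patch; the Target class is hypothetical):

import java.lang.reflect.Method;

public class VarargsLookupSketch {

  public static class Target {
    public static void testNoResult(String query, Object... args) {
      System.out.println(query + " (" + args.length + " args)");
    }
  }

  public static void main(String[] args) throws Exception {
    // Both expressions yield the identical Class object for Object[].
    System.out.println(Object[].class == new Object[] {}.getClass()); // true
    Method m = Target.class.getMethod("testNoResult", String.class, Object[].class);
    m.invoke(null, "select * from (VALUES 1)", new Object[] {});
  }
}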
diff --git a/pom.xml b/pom.xml
index c551747..e012ec2 100644
--- a/pom.xml
+++ b/pom.xml
@@ -82,8 +82,8 @@
       Apache Hive 2.3.2. If the version is changed, make sure the jars and their dependencies are updated.
     -->
     <hive.version>2.3.2</hive.version>
-    <hadoop.version>2.7.4</hadoop.version>
-    <hbase.version>2.1.1</hbase.version>
+    <hadoop.version>3.0.3</hadoop.version>
+    <hbase.version>2.1.4</hbase.version>
     <fmpp.version>1.0</fmpp.version>
     <freemarker.version>2.3.28</freemarker.version>
     <javassist.version>3.25.0-GA</javassist.version>
@@ -511,7 +511,7 @@
               <rules>
                 <bannedDependencies>
                   <excludes>
-                    <exclude>commons-logging</exclude>
+                    <!--<exclude>commons-logging</exclude>-->
                     <exclude>javax.servlet:servlet-api</exclude>
                     <exclude>org.mortbay.jetty:servlet-api</exclude>
                     <exclude>org.mortbay.jetty:servlet-api-2.5</exclude>
@@ -1006,10 +1006,17 @@
     <dependency>
       <groupId>ch.qos.logback</groupId>
       <artifactId>logback-classic</artifactId>
-      <version>1.0.13</version>
+      <version>${logback.version}</version>
       <scope>test</scope>
     </dependency>
     <dependency>
+      <groupId>ch.qos.logback</groupId>
+      <artifactId>logback-core</artifactId>
+      <version>${logback.version}</version>
+      <scope>test</scope>
+    </dependency>
+
+    <dependency>
       <groupId>de.huxhorn.lilith</groupId>
       <artifactId>de.huxhorn.lilith.logback.appender.multiplex-classic</artifactId>
       <version>0.9.44</version>
@@ -1055,6 +1062,26 @@
   <dependencyManagement>
     <dependencies>
       <dependency>
+          <groupId>org.slf4j</groupId>
+          <artifactId>slf4j-api</artifactId>
+          <version>${dep.slf4j.version}</version>
+      </dependency>
+      <dependency>
+          <groupId>org.slf4j</groupId>
+          <artifactId>jul-to-slf4j</artifactId>
+          <version>${dep.slf4j.version}</version>
+      </dependency>
+      <dependency>
+          <groupId>org.slf4j</groupId>
+          <artifactId>jcl-over-slf4j</artifactId>
+          <version>${dep.slf4j.version}</version>
+      </dependency>
+      <dependency>
+          <groupId>org.slf4j</groupId>
+          <artifactId>log4j-over-slf4j</artifactId>
+          <version>${dep.slf4j.version}</version>
+      </dependency>
+      <dependency>
         <groupId>${calcite.groupId}</groupId>
         <artifactId>calcite-core</artifactId>
         <version>${calcite.version}</version>
@@ -1886,14 +1913,14 @@
                 <artifactId>mockito-all</artifactId>
                 <groupId>org.mockito</groupId>
               </exclusion>
-              <exclusion>
-                <artifactId>commons-logging-api</artifactId>
-                <groupId>commons-logging</groupId>
-              </exclusion>
-              <exclusion>
-                <artifactId>commons-logging</artifactId>
-                <groupId>commons-logging</groupId>
-              </exclusion>
+              <!--<exclusion>-->
+                <!--<artifactId>commons-logging-api</artifactId>-->
+                <!--<groupId>commons-logging</groupId>-->
+              <!--</exclusion>-->
+              <!--<exclusion>-->
+                <!--<artifactId>commons-logging</artifactId>-->
+                <!--<groupId>commons-logging</groupId>-->
+              <!--</exclusion>-->
               <exclusion>
                 <groupId>com.sun.jersey</groupId>
                 <artifactId>jersey-core</artifactId>
@@ -1936,6 +1963,7 @@
               </exclusion>
             </exclusions>
           </dependency>
+          <!-- Hadoop Test Dependencies -->
           <dependency>
             <groupId>org.apache.hadoop</groupId>
             <artifactId>hadoop-common</artifactId>
@@ -2035,6 +2063,76 @@
           </dependency>
           <dependency>
             <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-hdfs</artifactId>
+            <version>${hadoop.version}</version>
+            <scope>test</scope>
+            <exclusions>
+              <exclusion>
+                <groupId>commons-logging</groupId>
+                <artifactId>commons-logging</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>org.mortbay.jetty</groupId>
+                <artifactId>servlet-api</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>javax.servlet</groupId>
+                <artifactId>servlet-api</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>io.netty</groupId>
+                <artifactId>netty-all</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>io.netty</groupId>
+                <artifactId>netty</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>log4j</groupId>
+                <artifactId>log4j</artifactId>
+              </exclusion>
+            </exclusions>
+          </dependency>
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-hdfs</artifactId>
+            <version>${hadoop.version}</version>
+            <scope>test</scope>
+            <classifier>tests</classifier>
+            <exclusions>
+              <exclusion>
+                <groupId>commons-logging</groupId>
+                <artifactId>commons-logging</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>org.mortbay.jetty</groupId>
+                <artifactId>servlet-api</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>javax.servlet</groupId>
+                <artifactId>servlet-api</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>log4j</groupId>
+                <artifactId>log4j</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>com.sun.jersey</groupId>
+                <artifactId>jersey-core</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>io.netty</groupId>
+                <artifactId>netty-all</artifactId>
+              </exclusion>
+              <exclusion>
+                <groupId>io.netty</groupId>
+                <artifactId>netty</artifactId>
+              </exclusion>
+            </exclusions>
+          </dependency>
+          <!-- Hadoop Test Dependencies -->
+          <dependency>
+            <groupId>org.apache.hadoop</groupId>
             <artifactId>hadoop-client</artifactId>
             <version>${hadoop.version}</version>
             <exclusions>
@@ -2541,74 +2639,6 @@
             <version>${jersey.version}</version>
           </dependency>
           <!--/GlassFish Jersey dependecies-->
-
-          <!-- Test Dependencies -->
-          <dependency>
-            <groupId>org.apache.hadoop</groupId>
-            <artifactId>hadoop-hdfs</artifactId>
-            <version>${hadoop.version}</version>
-            <scope>test</scope>
-            <exclusions>
-              <exclusion>
-                <groupId>commons-logging</groupId>
-                <artifactId>commons-logging</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>org.mortbay.jetty</groupId>
-                <artifactId>servlet-api</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>javax.servlet</groupId>
-                <artifactId>servlet-api</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>io.netty</groupId>
-                <artifactId>netty-all</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>io.netty</groupId>
-                <artifactId>netty</artifactId>
-              </exclusion>
-            </exclusions>
-          </dependency>
-          <!-- Test Dependencies -->
-          <dependency>
-            <groupId>org.apache.hadoop</groupId>
-            <artifactId>hadoop-hdfs</artifactId>
-            <version>${hadoop.version}</version>
-            <scope>test</scope>
-            <classifier>tests</classifier>
-            <exclusions>
-              <exclusion>
-                <groupId>commons-logging</groupId>
-                <artifactId>commons-logging</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>org.mortbay.jetty</groupId>
-                <artifactId>servlet-api</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>javax.servlet</groupId>
-                <artifactId>servlet-api</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>log4j</groupId>
-                <artifactId>log4j</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>com.sun.jersey</groupId>
-                <artifactId>jersey-core</artifactId>
-              </exclusion>
-              <exclusion>
-                <groupId>io.netty</groupId>
-                <artifactId>netty-all</artifactId>
-              </exclusion>
-              <exclusion>
-                  <groupId>io.netty</groupId>
-                  <artifactId>netty</artifactId>
-              </exclusion>
-            </exclusions>
-          </dependency>
         </dependencies>
       </dependencyManagement>
     </profile>


[drill] 10/11: DRILL-7221: Exclude debug files generated by maven debug option from jar

Posted by vo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

volodymyr pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/drill.git

commit 086ecbd4d3aee96b942c8110c9c948dcd7435b7d
Author: Volodymyr Vysotskyi <vv...@gmail.com>
AuthorDate: Mon Dec 2 18:46:51 2019 +0200

    DRILL-7221: Exclude debug files generated by maven debug option from jar
    
    closes #1915
---
 pom.xml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/pom.xml b/pom.xml
index 3ea8a95..68f75a2 100644
--- a/pom.xml
+++ b/pom.xml
@@ -438,6 +438,9 @@
             <exclude>**/logging.properties</exclude>
             <exclude>**/logback.out.xml</exclude>
             <exclude>**/logback.xml</exclude>
+            <!-- Excludes files generated by maven debug option -->
+            <exclude>**/javac.sh</exclude>
+            <exclude>**/org.codehaus.plexus.compiler.javac.JavacCompiler*</exclude>
           </excludes>
           <archive>
             <index>true</index>