Posted to commits@drill.apache.org by dz...@apache.org on 2022/05/12 04:55:01 UTC

[drill] branch 1.20 updated: DRILL-8221: Collect bug fixes from master for Drill 1.20.1 (#2545)

This is an automated email from the ASF dual-hosted git repository.

dzamo pushed a commit to branch 1.20
in repository https://gitbox.apache.org/repos/asf/drill.git


The following commit(s) were added to refs/heads/1.20 by this push:
     new 2fa849d79f DRILL-8221: Collect bug fixes from master for Drill 1.20.1 (#2545)
2fa849d79f is described below

commit 2fa849d79fc517378bdd282e6a6d157566192626
Author: James Turton <91...@users.noreply.github.com>
AuthorDate: Thu May 12 06:54:57 2022 +0200

    DRILL-8221: Collect bug fixes from master for Drill 1.20.1 (#2545)
    
    * Prepare for the next bugfix iteration.
    
    * SAS Reader fixes (#2472)
    
    Co-authored-by: pseudomo <ps...@yandex.ru>
    
    * Add jackson-bom (#2454)
    
    * [DRILL-8150] log4j 2.17.2 in format-excel (#2476)
    
    * DRILL-8151: Add support for more ElasticSearch and Cassandra data types (#2477)
    
    * DRILL-8154: Upgrade to poi 5.2.1 (#2480)
    
    * DRILL-8145: Fix flaky TestDrillbitResilience#memoryLeaksWhenCancelled test case (#2471)
    
    * Set Brotli codec jar and test to occur only on Linux amd64.
    
    * DRILL-8145: Fix flaky TestDrillbitResilience#memoryLeaksWhenCancelled test case
    
    - changing timeout for TestDrillbitResilience tests
    - timing tuning for memoryLeaksWhenCancelled
    - update TestContainers version
    - -DforkCount=1 for Travis Maven build
    - directMemoryMb: 2500 -> 4500 leads to fewer occasional test failures
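    
    For local reproduction, a sketch of the equivalent Maven invocation (the
    flag values are those wired into .travis.yml in the diff below):
    
        mvn install -DforkCount=1 -DmemoryMb=2048 -DdirectMemoryMb=5120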
    
    Co-authored-by: James Turton <ja...@somecomputer.xyz>
    
    * [MINOR UPDATE] Add Stalebot Config (#2487)
    
    * [MINOR UPDATE] Fix license for Stalebot Config (#2488)
    
    * DRILL-8156: Declare and chown a /data VOLUME in the Drill Dockerfile (#2491)
    
    * Add a mountpoint and VOLUME for local storage to Dockerfile.
    
    * Address review comments concerning layer ordering.
    
    * Fix image size blowup by moving chmod to intermediate container.
    
    * Combine RUN commands in Dockerfile.
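    
    Usage sketch for the new volume (the host volume name is illustrative;
    the BOOT option is the one named in the Dockerfile comment): mount it
    with docker run -v drill_data:/data and set
    sys.store.provider.local.path: "/data" in drill-override.conf so that
    local storage persists across containers.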
    
    * DRILL-8168: Do not duplicate attempts to impersonate a user in the REST API (#2495)
    
    * DRILL-8172: Use the specified memory usage for Travis CI (#2500)
    
    * DRILL-8165: Upgrade liquibase because of CVE-2022-0839 (#2497)
    
    * Create codeql-analysis.yml
    
    * Update codeql-analysis.yml
    
    Removed cpp from code analysis
    
    * [MINOR UPDATE] Add license to CodeQL YAML (#2501)
    
    * DRILL-8176: Upgrade Jackson Due to CVE-2020-36518 (#2504)
    
    * DRILL-8164: Upgrade metadata-extractor because of CVE-2022-24613 (#2493)
    
    * DRILL-8164: Upgrade metadata-extractor because of CVE-2022-24613
    
    * Update the ProfileCopyright tag name
    
    * Update the mov format name
    
    * Add the QuickTime.Rotation tag
    
    * Bump metadata-extractor to 2.17.0
    
    * DRILL-8178: Bump AWS Libraries to Latest Version (#2506)
    
    * DRILL-8175: Update Drill release script after 1.20 (#2503)
    
    * Set DRILL_PID_DIR in Dockerfile to writable location for distributed mode.
    
    Some users of the images built from this Dockerfile customise
    them so that they launch Drill in distributed mode instead of
    embedded mode.  This change saves them from having to set
    DRILL_PID_DIR themselves in order to succeed.
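    
    As an illustrative sketch only (the image tag is hypothetical, and a
    reachable ZooKeeper is assumed), such a customisation might override the
    entrypoint with the distributed-mode launcher that ships in the image:
    
        docker run -it apache/drill:1.20.1 /opt/drill/bin/drillbit.sh run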
    
    * Update release script and instructions after the release of 1.20.
    
    - Add support for specifying a build profile such as "hadoop-2".
    - Update instructions for the Drill web site.
    - Update instructions for uploading RCs (no more home.apache).
    - Some fixes.
    
    * DRILL-8176: minor issue in previous jackson bom (#2508)
    
    * minor issue in previous jackson bom
    
    * Update pom.xml
    
    * DRILL-8187: Dialect factory returns ANSI SQL dialect for BigQuery (#2513)
    
    * DRILL-8192: Cassandra queries fail when the Mongo plugin is enabled (#2518)
    
    * DRILL-8013: Drill attempts to push "$SUM0" to JDBC storage plugin for AVG (#2521)
    
    * DRILL-8194: Fix the REPLACE function throwing IndexOutOfBoundsException if the text's length is greater than previously applied (#2522)
    
    * DRILL-8200: Update Hadoop libs to >= 3.2.3 for CVE-2022-26612 (#2525)
    
    * Remove pointless Buffer casts.
    
    Compiling Drill with JDK > 8 will still result in ByteBuffer <-> Buffer cast
    exceptions at runtime when running on JDK 8, even though maven.target.version
    is set to 8. Setting maven.compiler.release to 8 solves the Buffer casts
    but raises a "package sun.security.jgss does not exist" compilation error
    on JDK 8. There were a few handwritten casts to avoid the Buffer casting
    issue, but many instances are not covered, so the few reverted in this
    commit achieve nothing.
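    
    A minimal sketch of the underlying incompatibility (a hypothetical
    snippet, not from the Drill sources): JDK 9+ redeclares flip() and
    friends on ByteBuffer with covariant return types, so bytecode compiled
    against the newer class library links ByteBuffer.flip() and fails with
    NoSuchMethodError on a JDK 8 runtime, where flip() exists only on Buffer.
    
        import java.nio.Buffer;
        import java.nio.ByteBuffer;
    
        class BufferCastDemo {
          public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(16);
            buf.put((byte) 1);
            // A plain buf.flip() compiled on JDK 9+ links the covariant
            // ByteBuffer.flip() and throws NoSuchMethodError on a JDK 8
            // runtime; the handwritten workaround upcasts so Buffer.flip()
            // is linked instead and runs on both:
            ((Buffer) buf).flip();
            System.out.println(buf.remaining()); // prints 1
          }
        }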
    
    * Update Hadoop to 3.2.3.
    
    * [MINOR UPDATE] Update AWS Java SDK to 1.12.211
    
    * DRILL-8219: Handle null catalog names returned by DB2 in storage-jdbc. (#2542)
    
    Co-authored-by: pseudomo <yu...@mail.ru>
    Co-authored-by: pseudomo <ps...@yandex.ru>
    Co-authored-by: Rymar Maksym <ri...@gmail.com>
    Co-authored-by: PJ Fanning <pj...@users.noreply.github.com>
    Co-authored-by: Volodymyr Vysotskyi <vv...@gmail.com>
    Co-authored-by: Vitalii Diravka <vi...@apache.org>
    Co-authored-by: Charles S. Givre <cg...@apache.org>
    Co-authored-by: luoc <lu...@apache.org>
    Co-authored-by: xurenhe <xu...@gmail.com>
---
 .github/stale.yml                                  |  35 +++++
 .github/workflows/codeql-analysis.yml              |  88 +++++++++++
 .travis.yml                                        |   5 +-
 Dockerfile                                         |  30 ++--
 common/pom.xml                                     |   2 +-
 contrib/data/pom.xml                               |   2 +-
 contrib/data/tpch-sample-data/pom.xml              |   2 +-
 contrib/format-esri/pom.xml                        |   2 +-
 contrib/format-excel/pom.xml                       |  11 +-
 contrib/format-hdf5/pom.xml                        |   2 +-
 contrib/format-httpd/pom.xml                       |   2 +-
 contrib/format-iceberg/pom.xml                     |   2 +-
 .../iceberg/plan/IcebergPluginImplementor.java     |   7 +
 contrib/format-image/pom.xml                       |   3 +-
 .../exec/store/image/GenericMetadataReader.java    |  10 +-
 .../format-image/src/test/resources/image/eps.json |   2 +-
 .../src/test/resources/image/jpeg.json             |   2 +-
 .../format-image/src/test/resources/image/mov.json |   5 +-
 .../src/test/resources/image/tiff.json             |   2 +-
 contrib/format-ltsv/pom.xml                        |   2 +-
 contrib/format-maprdb/pom.xml                      |   2 +-
 contrib/format-pcapng/pom.xml                      |   2 +-
 contrib/format-pdf/pom.xml                         |   2 +-
 contrib/format-sas/pom.xml                         |   2 +-
 .../drill/exec/store/sas/SasBatchReader.java       |  87 ++++++-----
 .../apache/drill/exec/store/sas/TestSasReader.java |  18 +--
 contrib/format-spss/pom.xml                        |   2 +-
 contrib/format-syslog/pom.xml                      |   2 +-
 contrib/format-xml/pom.xml                         |   2 +-
 contrib/pom.xml                                    |   2 +-
 contrib/storage-cassandra/pom.xml                  |   4 +-
 .../cassandra/CassandraColumnConverterFactory.java | 144 +++++++++++++++++
 .../CassandraColumnConverterFactoryProvider.java   |  25 ++-
 .../plan/CassandraEnumerablePrelContext.java       |   7 +
 .../exec/store/cassandra/BaseCassandraTest.java    |   8 +-
 .../exec/store/cassandra/CassandraQueryTest.java   |  91 ++++++++---
 ...tCassandraSuit.java => TestCassandraSuite.java} |   6 +-
 .../src/test/resources/queries.cql                 |  18 ++-
 contrib/storage-druid/pom.xml                      |   2 +-
 contrib/storage-elasticsearch/pom.xml              |  54 +++----
 .../ElasticsearchColumnConverterFactory.java       |  65 ++++++++
 ...lasticsearchColumnConverterFactoryProvider.java |  25 ++-
 .../plan/ElasticSearchEnumerablePrelContext.java   |   7 +
 .../elasticsearch/ElasticComplexTypesTest.java     |   9 +-
 .../store/elasticsearch/ElasticInfoSchemaTest.java |   8 +-
 .../store/elasticsearch/ElasticSearchPlanTest.java |   8 +-
 .../elasticsearch/ElasticSearchQueryTest.java      |  77 +++++----
 .../elasticsearch/TestElasticsearchSuite.java}     |  46 +++---
 contrib/storage-hbase/pom.xml                      |   2 +-
 contrib/storage-hive/core/pom.xml                  |   2 +-
 contrib/storage-hive/hive-exec-shade/pom.xml       |   2 +-
 contrib/storage-hive/pom.xml                       |   2 +-
 contrib/storage-http/pom.xml                       |   2 +-
 contrib/storage-jdbc/pom.xml                       |   2 +-
 .../drill/exec/store/jdbc/JdbcCatalogSchema.java   |   5 +
 .../store/jdbc/TestJdbcPluginWithPostgres.java     |  12 ++
 contrib/storage-kafka/pom.xml                      |   2 +-
 contrib/storage-kudu/pom.xml                       |   2 +-
 contrib/storage-mongo/pom.xml                      |   2 +-
 .../store/mongo/plan/MongoPluginImplementor.java   |   7 +
 contrib/storage-opentsdb/pom.xml                   |   2 +-
 contrib/storage-phoenix/README.md                  |   2 +-
 contrib/storage-phoenix/pom.xml                    |   2 +-
 contrib/storage-splunk/pom.xml                     |   2 +-
 contrib/udfs/pom.xml                               |   2 +-
 distribution/pom.xml                               |   4 +-
 .../src/main/resources/winutils/hadoop.dll         | Bin 96256 -> 88576 bytes
 .../src/main/resources/winutils/winutils.exe       | Bin 118784 -> 118784 bytes
 docs/dev/HadoopWinutils.md                         |   2 +-
 docs/dev/Release.md                                |  73 +++++----
 drill-yarn/pom.xml                                 |   2 +-
 exec/java-exec/pom.xml                             |  47 ++++--
 .../java/org/apache/drill/exec/ExecConstants.java  |   2 +-
 .../drill/exec/expr/fn/impl/StringFunctions.java   |   7 +-
 .../planner/logical/DrillReduceAggregatesRule.java |  10 +-
 ...CalciteSqlSumEmptyIsZeroAggFunctionWrapper.java | 173 +++++++++++++++++++++
 .../drill/exec/planner/sql/DrillOperatorTable.java |   7 +-
 .../apache/drill/exec/record/ColumnConverter.java  |   2 +-
 .../drill/exec/server/rest/BaseQueryRunner.java    |  55 ++++---
 .../drill/exec/server/rest/DrillRestServer.java    |   7 +
 ...xt.java => ColumnConverterFactoryProvider.java} |  22 +--
 ... => DefaultColumnConverterFactoryProvider.java} |  24 ++-
 .../store/enumerable/EnumerableBatchCreator.java   |   2 +-
 .../exec/store/enumerable/EnumerableGroupScan.java |  13 +-
 .../store/enumerable/EnumerableRecordReader.java   |   8 +-
 .../exec/store/enumerable/EnumerableSubScan.java   |   9 +-
 .../exec/store/enumerable/plan/EnumerablePrel.java |   7 +-
 .../enumerable/plan/EnumerablePrelContext.java     |   6 +
 .../exec/store/plan/AbstractPluginImplementor.java |   8 +
 .../parquet/hadoop/ColumnChunkIncReadStore.java    |   7 +-
 .../exec/expr/fn/impl/TestStringFunctions.java     |  14 ++
 .../physical/impl/writer/TestParquetWriter.java    |  12 +-
 .../drill/exec/server/TestDrillbitResilience.java  |  11 +-
 exec/jdbc-all/pom.xml                              |   2 +-
 exec/jdbc/pom.xml                                  |  20 +--
 exec/memory/base/pom.xml                           |   2 +-
 exec/memory/pom.xml                                |   2 +-
 exec/pom.xml                                       |   2 +-
 exec/rpc/pom.xml                                   |   2 +-
 exec/vector/pom.xml                                |   2 +-
 logical/pom.xml                                    |   2 +-
 metastore/iceberg-metastore/pom.xml                |   2 +-
 metastore/metastore-api/pom.xml                    |   2 +-
 metastore/mongo-metastore/pom.xml                  |   2 +-
 metastore/pom.xml                                  |   2 +-
 metastore/rdbms-metastore/pom.xml                  |   4 +-
 .../drill/metastore/rdbms/RdbmsMetastore.java      |   2 +
 pom.xml                                            |  59 ++-----
 protocol/pom.xml                                   |   2 +-
 tools/fmpp/pom.xml                                 |   2 +-
 tools/pom.xml                                      |   2 +-
 tools/release-scripts/release.sh                   |  71 +++++----
 112 files changed, 1201 insertions(+), 510 deletions(-)

diff --git a/.github/stale.yml b/.github/stale.yml
new file mode 100644
index 0000000000..42fe480290
--- /dev/null
+++ b/.github/stale.yml
@@ -0,0 +1,35 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Number of days of inactivity before an issue becomes stale
+daysUntilStale: 120
+# Number of days of inactivity before a stale issue is closed
+daysUntilClose: 14
+# Issues with these labels will never be considered stale
+exemptLabels:
+  - pinned
+  - security
+# Label to use when marking an issue as stale
+staleLabel: wontfix
+# Comment to post when marking an issue as stale. Set to `false` to disable
+markComment: >
+  This issue has been automatically marked as stale because it has not had
+  recent activity. It will be closed if no further activity occurs. Thank you
+  for your contributions.
+# Comment to post when closing a stale issue. Set to `false` to disable
+closeComment: false
diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml
new file mode 100644
index 0000000000..7c1cc32ddd
--- /dev/null
+++ b/.github/workflows/codeql-analysis.yml
@@ -0,0 +1,88 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# For most projects, this workflow file will not need changing; you simply need
+# to commit it to your repository.
+#
+# You may wish to alter this file to override the set of languages analyzed,
+# or to provide custom queries or build logic.
+#
+# ******** NOTE ********
+# We have attempted to detect the languages in your repository. Please check
+# the `language` matrix defined below to confirm you have the correct set of
+# supported CodeQL languages.
+#
+name: "CodeQL"
+
+on:
+  push:
+    branches: [ master ]
+  pull_request:
+    # The branches below must be a subset of the branches above
+    branches: [ master ]
+  schedule:
+    - cron: '33 21 * * 5'
+
+jobs:
+  analyze:
+    name: Analyze
+    runs-on: ubuntu-latest
+    permissions:
+      actions: read
+      contents: read
+      security-events: write
+
+    strategy:
+      fail-fast: false
+      matrix:
+        language: [ 'java', 'javascript' ]
+        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
+        # Learn more about CodeQL language support at https://git.io/codeql-language-support
+
+    steps:
+    - name: Checkout repository
+      uses: actions/checkout@v2
+
+    # Initializes the CodeQL tools for scanning.
+    - name: Initialize CodeQL
+      uses: github/codeql-action/init@v1
+      with:
+        languages: ${{ matrix.language }}
+        # If you wish to specify custom queries, you can do so here or in a config file.
+        # By default, queries listed here will override any specified in a config file.
+        # Prefix the list here with "+" to use these queries and those in the config file.
+        # queries: ./path/to/local/query, your-org/your-repo/queries@main
+
+    # Autobuild attempts to build any compiled languages  (C/C++, C#, or Java).
+    # If this step fails, then you should remove it and run the build manually (see below)
+    - name: Autobuild
+      uses: github/codeql-action/autobuild@v1
+
+    # ℹ️ Command-line programs to run using the OS shell.
+    # 📚 https://git.io/JvXDl
+
+    # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
+    #    and modify them (or add more) to build your code if your project
+    #    uses a compiled language
+
+    #- run: |
+    #   make bootstrap
+    #   make release
+
+    - name: Perform CodeQL Analysis
+      uses: github/codeql-action/analyze@v1
diff --git a/.travis.yml b/.travis.yml
index 957accd9b4..c5447276a6 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -45,6 +45,9 @@ cache:
 before_install:
   - export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-arm64"
   - export PATH="$JAVA_HOME/bin:$PATH"
+  - export MEMORYMB=2048
+  - export DIRECTMEMORYMB=5120
+  - free -m
   - java -version
   - mvn -version
   - git fetch --unshallow
@@ -71,7 +74,7 @@ install:
   # For protobuf phase: builds Drill project, performs license checkstyle goal and regenerates Java and C++ Protobuf files
   - |
     if [ $PHASE = "tests" ]; then \
-      mvn install --batch-mode --no-transfer-progress \
+      mvn install --batch-mode --no-transfer-progress -DforkCount=1 -DmemoryMb=$MEMORYMB -DdirectMemoryMb=$DIRECTMEMORYMB \
         -DexcludedGroups="org.apache.drill.categories.SlowTest,org.apache.drill.categories.UnlikelyTest,org.apache.drill.categories.SecurityTest"; \
     elif [ $PHASE = "build_checkstyle_protobuf" ]; then \
       MAVEN_OPTS="-Xms1G -Xmx1G" mvn install --no-transfer-progress -Drat.skip=false -Dlicense.skip=false --batch-mode -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -DskipTests=true -Dmaven.javadoc.skip=true -Dmaven.source.skip=true && \
diff --git a/Dockerfile b/Dockerfile
index 996fb4f3ed..30582dce4b 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -49,25 +49,33 @@ RUN mvn -Dmaven.artifact.threads=5 -T1C clean install -DskipTests
 # Get project version and copy built binaries into /opt/drill directory
 RUN VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec) \
  && mkdir /opt/drill \
- && mv distribution/target/apache-drill-${VERSION}/apache-drill-${VERSION}/* /opt/drill
+ && mv distribution/target/apache-drill-${VERSION}/apache-drill-${VERSION}/* /opt/drill \
+ && chmod -R +r /opt/drill
 
 # Target image
 
 # Set the BASE_IMAGE build arg when you invoke docker build.  
 FROM $BASE_IMAGE
 
-ENV DRILL_HOME=/opt/drill DRILL_USER=drilluser
+# Starts Drill in embedded mode and connects to Sqlline
+ENTRYPOINT $DRILL_HOME/bin/drill-embedded
 
-RUN mkdir $DRILL_HOME
+ENV DRILL_HOME=/opt/drill
+ENV DRILL_USER=drilluser
+ENV DRILL_USER_HOME=/var/lib/drill
+ENV DRILL_PID_DIR=$DRILL_USER_HOME
+ENV DRILL_LOG_DIR=$DRILL_USER_HOME/log
+ENV DATA_VOL=/data
 
-RUN groupadd -g 999 $DRILL_USER \
- && useradd -r -u 999 -g $DRILL_USER $DRILL_USER -m -d /var/lib/drill \
- && chown -R $DRILL_USER: $DRILL_HOME
+RUN mkdir $DRILL_HOME $DATA_VOL \
+ && groupadd -g 999 $DRILL_USER \
+ && useradd -r -u 999 -g $DRILL_USER $DRILL_USER -m -d $DRILL_USER_HOME \
+ && chown $DRILL_USER: $DATA_VOL
 
-USER $DRILL_USER
+# A Docker volume where users may store persistent data, e.g. persistent Drill
+# config by specifying a Drill BOOT option of sys.store.provider.local.path: "/data".
+VOLUME $DATA_VOL
 
-COPY --from=build --chown=$DRILL_USER /opt/drill $DRILL_HOME
-
-# Starts Drill in embedded mode and connects to Sqlline
-ENTRYPOINT $DRILL_HOME/bin/drill-embedded
+COPY --from=build /opt/drill $DRILL_HOME
 
+USER $DRILL_USER
diff --git a/common/pom.xml b/common/pom.xml
index 9b3cc49575..97f341bc50 100644
--- a/common/pom.xml
+++ b/common/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-root</artifactId>
     <groupId>org.apache.drill</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-common</artifactId>
diff --git a/contrib/data/pom.xml b/contrib/data/pom.xml
index 3d41c82a58..df7f76a640 100644
--- a/contrib/data/pom.xml
+++ b/contrib/data/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <groupId>org.apache.drill.contrib.data</groupId>
diff --git a/contrib/data/tpch-sample-data/pom.xml b/contrib/data/tpch-sample-data/pom.xml
index 54492c400e..2ec53974b5 100644
--- a/contrib/data/tpch-sample-data/pom.xml
+++ b/contrib/data/tpch-sample-data/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-contrib-data-parent</artifactId>
     <groupId>org.apache.drill.contrib.data</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>tpch-sample-data</artifactId>
diff --git a/contrib/format-esri/pom.xml b/contrib/format-esri/pom.xml
index b04a8d8489..a038c15519 100644
--- a/contrib/format-esri/pom.xml
+++ b/contrib/format-esri/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-esri</artifactId>
diff --git a/contrib/format-excel/pom.xml b/contrib/format-excel/pom.xml
index f73b7635cb..f2eba4cf9f 100644
--- a/contrib/format-excel/pom.xml
+++ b/contrib/format-excel/pom.xml
@@ -24,14 +24,15 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-excel</artifactId>
   <name>Drill : Contrib : Format : Excel</name>
 
   <properties>
-    <poi.version>5.2.0</poi.version>
+    <poi.version>5.2.1</poi.version>
+    <log4j.version>2.17.2</log4j.version>
   </properties>
   <dependencies>
     <dependency>
@@ -52,17 +53,17 @@
     <dependency>
       <groupId>com.github.pjfanning</groupId>
       <artifactId>excel-streaming-reader</artifactId>
-      <version>3.4.0</version>
+      <version>3.6.0</version>
     </dependency>
     <dependency>
       <groupId>org.apache.logging.log4j</groupId>
       <artifactId>log4j-api</artifactId>
-      <version>2.17.1</version>
+      <version>${log4j.version}</version>
     </dependency>
     <dependency>
       <groupId>org.apache.logging.log4j</groupId>
       <artifactId>log4j-to-slf4j</artifactId>
-      <version>2.17.1</version>
+      <version>${log4j.version}</version>
     </dependency>
     <!-- Test dependencies -->
     <dependency>
diff --git a/contrib/format-hdf5/pom.xml b/contrib/format-hdf5/pom.xml
index df0708ab35..a27d90fab8 100644
--- a/contrib/format-hdf5/pom.xml
+++ b/contrib/format-hdf5/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-hdf5</artifactId>
diff --git a/contrib/format-httpd/pom.xml b/contrib/format-httpd/pom.xml
index 2eda20db0d..2d74b791f2 100644
--- a/contrib/format-httpd/pom.xml
+++ b/contrib/format-httpd/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
   <artifactId>drill-format-httpd</artifactId>
   <name>Drill : Contrib : Format : Httpd/Nginx Access Log</name>
diff --git a/contrib/format-iceberg/pom.xml b/contrib/format-iceberg/pom.xml
index 9ff02c562e..49f777776d 100644
--- a/contrib/format-iceberg/pom.xml
+++ b/contrib/format-iceberg/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-iceberg-format</artifactId>
diff --git a/contrib/format-iceberg/src/main/java/org/apache/drill/exec/store/iceberg/plan/IcebergPluginImplementor.java b/contrib/format-iceberg/src/main/java/org/apache/drill/exec/store/iceberg/plan/IcebergPluginImplementor.java
index 3d85b54a79..a4863273d8 100644
--- a/contrib/format-iceberg/src/main/java/org/apache/drill/exec/store/iceberg/plan/IcebergPluginImplementor.java
+++ b/contrib/format-iceberg/src/main/java/org/apache/drill/exec/store/iceberg/plan/IcebergPluginImplementor.java
@@ -32,6 +32,8 @@ import org.apache.drill.exec.planner.common.DrillLimitRelBase;
 import org.apache.drill.exec.planner.logical.DrillOptiq;
 import org.apache.drill.exec.planner.logical.DrillParseContext;
 import org.apache.drill.exec.planner.physical.PrelUtil;
+import org.apache.drill.exec.store.StoragePlugin;
+import org.apache.drill.exec.store.dfs.FileSystemPlugin;
 import org.apache.drill.exec.store.iceberg.IcebergGroupScan;
 import org.apache.drill.exec.store.plan.AbstractPluginImplementor;
 import org.apache.drill.exec.store.plan.rel.PluginFilterRel;
@@ -134,6 +136,11 @@ public class IcebergPluginImplementor extends AbstractPluginImplementor {
     return true;
   }
 
+  @Override
+  protected Class<? extends StoragePlugin> supportedPlugin() {
+    return FileSystemPlugin.class;
+  }
+
   @Override
   public boolean splitProject(Project project) {
     return true;
diff --git a/contrib/format-image/pom.xml b/contrib/format-image/pom.xml
index ceacc9bd8c..6764c30aa0 100644
--- a/contrib/format-image/pom.xml
+++ b/contrib/format-image/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-image</artifactId>
@@ -39,6 +39,7 @@
     <dependency>
       <groupId>com.drewnoakes</groupId>
       <artifactId>metadata-extractor</artifactId>
+      <version>2.17.0</version>
     </dependency>
 
     <!-- Test dependencies -->
diff --git a/contrib/format-image/src/main/java/org/apache/drill/exec/store/image/GenericMetadataReader.java b/contrib/format-image/src/main/java/org/apache/drill/exec/store/image/GenericMetadataReader.java
index 4268fcf2c9..3598658cb3 100644
--- a/contrib/format-image/src/main/java/org/apache/drill/exec/store/image/GenericMetadataReader.java
+++ b/contrib/format-image/src/main/java/org/apache/drill/exec/store/image/GenericMetadataReader.java
@@ -145,17 +145,17 @@ public class GenericMetadataReader
           try {
             int numOfComponent = 1;
             int colorType = pngDir.getInt(PngDirectory.TAG_COLOR_TYPE);
-            if (colorType == PngColorType.IndexedColor.getNumericValue()) {
+            if (colorType == PngColorType.INDEXED_COLOR.getNumericValue()) {
               directory.setColorMode("Indexed");
-            } else if (colorType == PngColorType.Greyscale.getNumericValue()) {
+            } else if (colorType == PngColorType.GREYSCALE.getNumericValue()) {
               directory.setColorMode("Grayscale");
-            } else if (colorType == PngColorType.GreyscaleWithAlpha.getNumericValue()) {
+            } else if (colorType == PngColorType.GREYSCALE_WITH_ALPHA.getNumericValue()) {
               numOfComponent = 2;
               directory.setColorMode("Grayscale");
               directory.setAlpha(true);
-            } else if (colorType == PngColorType.TrueColor.getNumericValue()) {
+            } else if (colorType == PngColorType.TRUE_COLOR.getNumericValue()) {
               numOfComponent = 3;
-            } else if (colorType == PngColorType.TrueColorWithAlpha.getNumericValue()) {
+            } else if (colorType == PngColorType.TRUE_COLOR_WITH_ALPHA.getNumericValue()) {
               numOfComponent = 4;
               directory.setAlpha(true);
             }
diff --git a/contrib/format-image/src/test/resources/image/eps.json b/contrib/format-image/src/test/resources/image/eps.json
index 0a5c44102c..4895c2ad3a 100644
--- a/contrib/format-image/src/test/resources/image/eps.json
+++ b/contrib/format-image/src/test/resources/image/eps.json
@@ -62,7 +62,7 @@
     "RenderingIntent" : "Media-Relative Colorimetric",
     "XYZValues" : "0.964 1 0.825",
     "TagCount" : "10",
-    "Copyright" : "(c) 1999 Adobe Systems Inc.",
+    "ProfileCopyright" : "(c) 1999 Adobe Systems Inc.",
     "ProfileDescription" : "GBR",
     "MediaWhitePoint" : "(0.9505, 1, 1.0891)",
     "MediaBlackPoint" : "(0, 0, 0)",
diff --git a/contrib/format-image/src/test/resources/image/jpeg.json b/contrib/format-image/src/test/resources/image/jpeg.json
index 6963a30538..6d590ef854 100644
--- a/contrib/format-image/src/test/resources/image/jpeg.json
+++ b/contrib/format-image/src/test/resources/image/jpeg.json
@@ -141,7 +141,7 @@
     "DeviceModel" : "sRGB",
     "XYZValues" : "0.964 1 0.825",
     "TagCount" : "17",
-    "Copyright" : "Copyright (c) 1998 Hewlett-Packard Company",
+    "ProfileCopyright" : "Copyright (c) 1998 Hewlett-Packard Company",
     "ProfileDescription" : "sRGB IEC61966-2.1",
     "MediaWhitePoint" : "(0.9505, 1, 1.0891)",
     "MediaBlackPoint" : "(0, 0, 0)",
diff --git a/contrib/format-image/src/test/resources/image/mov.json b/contrib/format-image/src/test/resources/image/mov.json
index b3c338f98a..6cb445be6c 100644
--- a/contrib/format-image/src/test/resources/image/mov.json
+++ b/contrib/format-image/src/test/resources/image/mov.json
@@ -1,5 +1,5 @@
 {
-  "Format" : "MOV",
+  "Format" : "QUICKTIME",
   "Duration" : "01:32:3650",
   "PixelWidth" : "560",
   "PixelHeight" : "320",
@@ -31,7 +31,8 @@
     "SelectionTime" : "0",
     "SelectionDuration" : "0",
     "CurrentTime" : "0",
-    "NextTrackID" : "3"
+    "NextTrackID" : "3",
+    "Rotation" : "0"
   },
   "QuickTimeVideo" : {
     "CreationTime" : "Fri Jan 01 00:00:00 +00:00 1904",
diff --git a/contrib/format-image/src/test/resources/image/tiff.json b/contrib/format-image/src/test/resources/image/tiff.json
index 7293452a29..e48a8ceb6b 100644
--- a/contrib/format-image/src/test/resources/image/tiff.json
+++ b/contrib/format-image/src/test/resources/image/tiff.json
@@ -114,7 +114,7 @@
     "DeviceModel" : "sRGB",
     "XYZValues" : "0.964 1 0.825",
     "TagCount" : "17",
-    "Copyright" : "Copyright (c) 1998 Hewlett-Packard Company",
+    "ProfileCopyright" : "Copyright (c) 1998 Hewlett-Packard Company",
     "ProfileDescription" : "sRGB IEC61966-2.1",
     "MediaWhitePoint" : "(0.9505, 1, 1.0891)",
     "MediaBlackPoint" : "(0, 0, 0)",
diff --git a/contrib/format-ltsv/pom.xml b/contrib/format-ltsv/pom.xml
index df26d82413..57ac8efcbc 100644
--- a/contrib/format-ltsv/pom.xml
+++ b/contrib/format-ltsv/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-ltsv</artifactId>
diff --git a/contrib/format-maprdb/pom.xml b/contrib/format-maprdb/pom.xml
index 93af3bdfa6..2b84e00c5b 100644
--- a/contrib/format-maprdb/pom.xml
+++ b/contrib/format-maprdb/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-mapr</artifactId>
diff --git a/contrib/format-pcapng/pom.xml b/contrib/format-pcapng/pom.xml
index 6944fd7775..7d360d1ccf 100644
--- a/contrib/format-pcapng/pom.xml
+++ b/contrib/format-pcapng/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-pcapng</artifactId>
diff --git a/contrib/format-pdf/pom.xml b/contrib/format-pdf/pom.xml
index 2f1310383f..23d779ab11 100644
--- a/contrib/format-pdf/pom.xml
+++ b/contrib/format-pdf/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-pdf</artifactId>
diff --git a/contrib/format-sas/pom.xml b/contrib/format-sas/pom.xml
index 964ac735fd..ffe1deee38 100644
--- a/contrib/format-sas/pom.xml
+++ b/contrib/format-sas/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-sas</artifactId>
diff --git a/contrib/format-sas/src/main/java/org/apache/drill/exec/store/sas/SasBatchReader.java b/contrib/format-sas/src/main/java/org/apache/drill/exec/store/sas/SasBatchReader.java
index ac6d9873b6..0305f8b68c 100644
--- a/contrib/format-sas/src/main/java/org/apache/drill/exec/store/sas/SasBatchReader.java
+++ b/contrib/format-sas/src/main/java/org/apache/drill/exec/store/sas/SasBatchReader.java
@@ -19,8 +19,10 @@
 package org.apache.drill.exec.store.sas;
 
 import com.epam.parso.Column;
+import com.epam.parso.ColumnFormat;
 import com.epam.parso.SasFileProperties;
 import com.epam.parso.SasFileReader;
+import com.epam.parso.impl.DateTimeConstants;
 import com.epam.parso.impl.SasFileReaderImpl;
 import org.apache.drill.common.AutoCloseables;
 import org.apache.drill.common.exceptions.CustomErrorContext;
@@ -163,30 +165,28 @@ public class SasBatchReader implements ManagedReader<FileScanFramework.FileSchem
   private TupleMetadata buildSchema() {
     SchemaBuilder builder = new SchemaBuilder();
     List<Column> columns = sasFileReader.getColumns();
-    int counter = 0;
     for (Column column : columns) {
-      String fieldName = column.getName();
+      String columnName = column.getName();
+      String columnType = column.getType().getSimpleName();
+      ColumnFormat columnFormat = column.getFormat();
       try {
         MinorType type = null;
-        if (firstRow[counter] != null) {
-          type = getType(firstRow[counter].getClass().getSimpleName());
-          if (type == MinorType.BIGINT && !column.getFormat().isEmpty()) {
-            logger.debug("Found possible time");
-            type = MinorType.TIME;
-          }
+        if (DateTimeConstants.TIME_FORMAT_STRINGS.contains(columnFormat.getName())) {
+          type = MinorType.TIME;
+        } else if (DateTimeConstants.DATE_FORMAT_STRINGS.containsKey(columnFormat.getName())) {
+          type = MinorType.DATE;
+        } else if (DateTimeConstants.DATETIME_FORMAT_STRINGS.containsKey(columnFormat.getName())) {
+          type = MinorType.TIMESTAMP;
         } else {
-          // If the first row is null
-          String columnType = column.getType().getSimpleName();
           type = getType(columnType);
         }
-        builder.addNullable(fieldName, type);
+        builder.addNullable(columnName, type);
       } catch (Exception e) {
         throw UserException.dataReadError()
-          .message("Error with column type: " + firstRow[counter].getClass().getSimpleName())
+          .message("Error with type of column " + columnName + "; Type: " + columnType)
           .addContext(errorContext)
           .build(logger);
       }
-      counter++;
     }
 
     return builder.buildSchema();
@@ -199,14 +199,14 @@ public class SasBatchReader implements ManagedReader<FileScanFramework.FileSchem
       MinorType type = field.getType().getMinorType();
       if (type == MinorType.FLOAT8) {
         writerList.add(new DoubleSasColumnWriter(colIndex, fieldName, rowWriter));
-      } else if (type == MinorType.BIGINT) {
-        writerList.add(new BigIntSasColumnWriter(colIndex, fieldName, rowWriter));
       } else if (type == MinorType.DATE) {
         writerList.add(new DateSasColumnWriter(colIndex, fieldName, rowWriter));
       } else if (type == MinorType.TIME) {
         writerList.add(new TimeSasColumnWriter(colIndex, fieldName, rowWriter));
-      } else if (type == MinorType.VARCHAR){
+      } else if (type == MinorType.VARCHAR) {
         writerList.add(new StringSasColumnWriter(colIndex, fieldName, rowWriter));
+      } else if (type == MinorType.TIMESTAMP) {
+        writerList.add(new TimestampSasColumnWriter(colIndex, fieldName, rowWriter));
       } else {
         throw UserException.dataReadError()
           .message(fieldName + " is an unparsable data type: " + type.name() + ".  The SAS reader does not support this data type.")
@@ -221,11 +221,11 @@ public class SasBatchReader implements ManagedReader<FileScanFramework.FileSchem
     switch (simpleType) {
       case "String":
         return MinorType.VARCHAR;
-      case "Numeric":
       case "Double":
-        return MinorType.FLOAT8;
+      case "Number":
+      case "Numeric":
       case "Long":
-        return MinorType.BIGINT;
+        return MinorType.FLOAT8;
       case "Date":
         return MinorType.DATE;
       default:
@@ -366,7 +366,7 @@ public class SasBatchReader implements ManagedReader<FileScanFramework.FileSchem
     @Override
     public void load(Object[] row) {
       if (row[columnIndex] != null) {
-        writer.setString((String) row[columnIndex]);
+        writer.setString(row[columnIndex].toString());
       }
     }
 
@@ -377,18 +377,6 @@ public class SasBatchReader implements ManagedReader<FileScanFramework.FileSchem
     }
   }
 
-  public static class BigIntSasColumnWriter extends SasColumnWriter {
-
-    BigIntSasColumnWriter (int columnIndex, String columnName, RowSetLoader rowWriter) {
-      super(columnIndex, columnName, rowWriter.scalar(columnName));
-    }
-
-    @Override
-    public void load(Object[] row) {
-      writer.setLong((Long) row[columnIndex]);
-    }
-  }
-
   public static class DateSasColumnWriter extends SasColumnWriter {
 
     DateSasColumnWriter (int columnIndex, String columnName, RowSetLoader rowWriter) {
@@ -397,8 +385,10 @@ public class SasBatchReader implements ManagedReader<FileScanFramework.FileSchem
 
     @Override
     public void load(Object[] row) {
-      LocalDate value = convertDateToLocalDate((Date)row[columnIndex]);
-      writer.setDate(value);
+      if (row[columnIndex] != null) {
+        LocalDate value = convertDateToLocalDate((Date)row[columnIndex]);
+        writer.setDate(value);
+      }
     }
 
     public void load(LocalDate date) {
@@ -408,13 +398,13 @@ public class SasBatchReader implements ManagedReader<FileScanFramework.FileSchem
 
   public static class TimeSasColumnWriter extends SasColumnWriter {
 
-    TimeSasColumnWriter (int columnIndex, String columnName, RowSetLoader rowWriter) {
+    TimeSasColumnWriter(int columnIndex, String columnName, RowSetLoader rowWriter) {
       super(columnIndex, columnName, rowWriter.scalar(columnName));
     }
 
     @Override
     public void load(Object[] row) {
-      int seconds = ((Long)row[columnIndex]).intValue();
+      int seconds = ((Long) row[columnIndex]).intValue();
       LocalTime value = LocalTime.parse(formatSeconds(seconds));
       writer.setTime(value);
     }
@@ -453,13 +443,24 @@ public class SasBatchReader implements ManagedReader<FileScanFramework.FileSchem
 
     @Override
     public void load(Object[] row) {
-      // The SAS reader does something strange with zeros. For whatever reason, even if the
-      // field is a floating point number, the value is returned as a long.  This causes class
-      // cast exceptions.
-      if (row[columnIndex].equals(0L)) {
-        writer.setDouble(0.0);
-      } else {
-        writer.setDouble((Double) row[columnIndex]);
+      if (row[columnIndex] != null) {
+        if (row[columnIndex] instanceof Number) {
+          writer.setDouble(((Number) row[columnIndex]).doubleValue());
+        }
+      }
+    }
+  }
+
+  public static class TimestampSasColumnWriter extends SasColumnWriter {
+
+    TimestampSasColumnWriter(int columnIndex, String columnName, RowSetLoader rowWriter) {
+      super(columnIndex, columnName, rowWriter.scalar(columnName));
+    }
+
+    @Override
+    public void load(Object[] row) {
+      if (row[columnIndex] != null) {
+        writer.setTimestamp(((Date) row[columnIndex]).toInstant());
       }
     }
   }
diff --git a/contrib/format-sas/src/test/java/org/apache/drill/exec/store/sas/TestSasReader.java b/contrib/format-sas/src/test/java/org/apache/drill/exec/store/sas/TestSasReader.java
index c007f1a009..be0965ebea 100644
--- a/contrib/format-sas/src/test/java/org/apache/drill/exec/store/sas/TestSasReader.java
+++ b/contrib/format-sas/src/test/java/org/apache/drill/exec/store/sas/TestSasReader.java
@@ -56,7 +56,7 @@ public class TestSasReader extends ClusterTest {
     RowSet results  = client.queryBuilder().sql(sql).rowSet();
 
     TupleMetadata expectedSchema = new SchemaBuilder()
-      .addNullable("x1", MinorType.BIGINT)
+      .addNullable("x1", MinorType.FLOAT8)
       .addNullable("x2", MinorType.FLOAT8)
       .addNullable("x3", MinorType.VARCHAR)
       .addNullable("x4", MinorType.FLOAT8)
@@ -70,13 +70,13 @@ public class TestSasReader extends ClusterTest {
       .addNullable("x12", MinorType.FLOAT8)
       .addNullable("x13", MinorType.FLOAT8)
       .addNullable("x14", MinorType.FLOAT8)
-      .addNullable("x15", MinorType.BIGINT)
-      .addNullable("x16", MinorType.BIGINT)
-      .addNullable("x17", MinorType.BIGINT)
-      .addNullable("x18", MinorType.BIGINT)
-      .addNullable("x19", MinorType.BIGINT)
-      .addNullable("x20", MinorType.BIGINT)
-      .addNullable("x21", MinorType.BIGINT)
+      .addNullable("x15", MinorType.FLOAT8)
+      .addNullable("x16", MinorType.FLOAT8)
+      .addNullable("x17", MinorType.FLOAT8)
+      .addNullable("x18", MinorType.FLOAT8)
+      .addNullable("x19", MinorType.FLOAT8)
+      .addNullable("x20", MinorType.FLOAT8)
+      .addNullable("x21", MinorType.FLOAT8)
       .buildSchema();
 
     RowSet expected = new RowSetBuilder(client.allocator(), expectedSchema)
@@ -122,7 +122,7 @@ public class TestSasReader extends ClusterTest {
     RowSet results  = client.queryBuilder().sql(sql).rowSet();
 
     TupleMetadata expectedSchema = new SchemaBuilder()
-      .addNullable("x1", MinorType.BIGINT)
+      .addNullable("x1", MinorType.FLOAT8)
       .addNullable("x2", MinorType.FLOAT8)
       .addNullable("x3", MinorType.VARCHAR)
       .buildSchema();
diff --git a/contrib/format-spss/pom.xml b/contrib/format-spss/pom.xml
index e9e8a175e6..b45815b3bf 100644
--- a/contrib/format-spss/pom.xml
+++ b/contrib/format-spss/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-spss</artifactId>
diff --git a/contrib/format-syslog/pom.xml b/contrib/format-syslog/pom.xml
index 1392af8eb8..4145d9082d 100644
--- a/contrib/format-syslog/pom.xml
+++ b/contrib/format-syslog/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-syslog</artifactId>
diff --git a/contrib/format-xml/pom.xml b/contrib/format-xml/pom.xml
index 4a4012af12..934b034951 100644
--- a/contrib/format-xml/pom.xml
+++ b/contrib/format-xml/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-format-xml</artifactId>
diff --git a/contrib/pom.xml b/contrib/pom.xml
index 4e722e0cf0..78fc7eaeea 100644
--- a/contrib/pom.xml
+++ b/contrib/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-root</artifactId>
     <groupId>org.apache.drill</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <groupId>org.apache.drill.contrib</groupId>
diff --git a/contrib/storage-cassandra/pom.xml b/contrib/storage-cassandra/pom.xml
index 022f1331e1..c8e60991fc 100644
--- a/contrib/storage-cassandra/pom.xml
+++ b/contrib/storage-cassandra/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-storage-cassandra</artifactId>
@@ -107,7 +107,7 @@
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
           <includes>
-            <include>**/TestCassandraSuit.class</include>
+            <include>**/TestCassandraSuite.class</include>
           </includes>
           <excludes>
             <exclude>**/CassandraComplexTypesTest.java</exclude>
diff --git a/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/CassandraColumnConverterFactory.java b/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/CassandraColumnConverterFactory.java
new file mode 100644
index 0000000000..861b83fd1b
--- /dev/null
+++ b/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/CassandraColumnConverterFactory.java
@@ -0,0 +1,144 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.cassandra;
+
+import com.datastax.driver.core.Duration;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.record.ColumnConverter;
+import org.apache.drill.exec.record.ColumnConverterFactory;
+import org.apache.drill.exec.record.metadata.ColumnMetadata;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.TupleWriter;
+import org.apache.drill.exec.vector.accessor.ValueWriter;
+import org.joda.time.Period;
+import org.joda.time.format.PeriodFormatter;
+import org.joda.time.format.PeriodFormatterBuilder;
+
+import java.math.BigDecimal;
+import java.math.BigInteger;
+import java.net.Inet4Address;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
+import java.util.Map;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import java.util.stream.StreamSupport;
+
+public class CassandraColumnConverterFactory extends ColumnConverterFactory {
+
+  private static final PeriodFormatter FORMATTER = new PeriodFormatterBuilder()
+    .appendYears()
+    .appendSuffix("Y")
+    .appendMonths()
+    .appendSuffix("M")
+    .appendWeeks()
+    .appendSuffix("W")
+    .appendDays()
+    .appendSuffix("D")
+    .appendHours()
+    .appendSuffix("H")
+    .appendMinutes()
+    .appendSuffix("M")
+    .appendSecondsWithOptionalMillis()
+    .appendSuffix("S")
+    .toFormatter();
+
+  public CassandraColumnConverterFactory(TupleMetadata providedSchema) {
+    super(providedSchema);
+  }
+
+  @Override
+  public ColumnConverter.ScalarColumnConverter buildScalar(ColumnMetadata readerSchema, ValueWriter writer) {
+    switch (readerSchema.type()) {
+      case INTERVAL:
+        return new ColumnConverter.ScalarColumnConverter(value -> {
+          Duration duration = (Duration) value;
+          writer.setPeriod(Period.parse(duration.toString(), FORMATTER));
+        });
+      case BIGINT:
+        return new ColumnConverter.ScalarColumnConverter(value -> {
+          long longValue;
+          if (value instanceof BigInteger) {
+            longValue = ((BigInteger) value).longValue();
+          } else {
+            longValue = (Long) value;
+          }
+          writer.setLong(longValue);
+        });
+      case VARCHAR:
+        return new ColumnConverter.ScalarColumnConverter(value -> writer.setString(value.toString()));
+      case VARDECIMAL:
+        return new ColumnConverter.ScalarColumnConverter(value -> writer.setDecimal((BigDecimal) value));
+      case VARBINARY:
+        return new ColumnConverter.ScalarColumnConverter(value -> {
+          byte[] bytes;
+          if (value instanceof Inet4Address) {
+            bytes = ((Inet4Address) value).getAddress();
+          } else if (value instanceof UUID) {
+            UUID uuid = (UUID) value;
+            bytes = ByteBuffer.wrap(new byte[16])
+              .order(ByteOrder.BIG_ENDIAN)
+              .putLong(uuid.getMostSignificantBits())
+              .putLong(uuid.getLeastSignificantBits())
+              .array();
+          } else {
+            bytes = (byte[]) value;
+          }
+          writer.setBytes(bytes, bytes.length);
+        });
+      case BIT:
+        return new ColumnConverter.ScalarColumnConverter(value -> writer.setBoolean((Boolean) value));
+      default:
+        return super.buildScalar(readerSchema, writer);
+    }
+  }
+
+  @Override
+  protected ColumnConverter getMapConverter(TupleMetadata providedSchema,
+    TupleMetadata readerSchema, TupleWriter tupleWriter) {
+    Map<String, ColumnConverter> converters = StreamSupport.stream(readerSchema.spliterator(), false)
+      .collect(Collectors.toMap(
+        ColumnMetadata::name,
+        columnMetadata ->
+          getConverter(providedSchema, columnMetadata, tupleWriter.column(columnMetadata.name()))));
+
+    return new CassandraMapColumnConverter(this, providedSchema, tupleWriter, converters);
+  }
+
+  private static class CassandraMapColumnConverter extends ColumnConverter.MapColumnConverter {
+
+    public CassandraMapColumnConverter(ColumnConverterFactory factory, TupleMetadata providedSchema, TupleWriter tupleWriter, Map<String, ColumnConverter> converters) {
+      super(factory, providedSchema, tupleWriter, converters);
+    }
+
+    @Override
+    protected TypeProtos.MinorType getScalarMinorType(Class<?> clazz) {
+      if (clazz == Duration.class) {
+        return TypeProtos.MinorType.INTERVAL;
+      } else if (clazz == Inet4Address.class
+        || clazz == UUID.class) {
+        return TypeProtos.MinorType.VARBINARY;
+      } else if (clazz == java.math.BigInteger.class) {
+        return TypeProtos.MinorType.BIGINT;
+      } else if (clazz == org.apache.calcite.avatica.util.ByteString.class) {
+        return TypeProtos.MinorType.VARCHAR;
+      }
+      return super.getScalarMinorType(clazz);
+    }
+  }
+}
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java b/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/CassandraColumnConverterFactoryProvider.java
similarity index 57%
copy from exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
copy to contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/CassandraColumnConverterFactoryProvider.java
index 83f8f72052..54254ebbac 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
+++ b/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/CassandraColumnConverterFactoryProvider.java
@@ -15,22 +15,17 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.drill.exec.store.enumerable.plan;
+package org.apache.drill.exec.store.cassandra;
 
-import org.apache.calcite.plan.RelOptCluster;
-import org.apache.calcite.rel.RelNode;
+import org.apache.drill.exec.record.ColumnConverterFactory;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.store.enumerable.ColumnConverterFactoryProvider;
 
-import java.util.Map;
+public class CassandraColumnConverterFactoryProvider implements ColumnConverterFactoryProvider {
+  public static final ColumnConverterFactoryProvider INSTANCE = new CassandraColumnConverterFactoryProvider();
 
-public interface EnumerablePrelContext {
-
-  String generateCode(RelOptCluster cluster, RelNode relNode);
-
-  RelNode transformNode(RelNode input);
-
-  Map<String, Integer> getFieldsMap(RelNode transformedNode);
-
-  String getPlanPrefix();
-
-  String getTablePath(RelNode input);
+  @Override
+  public ColumnConverterFactory getFactory(TupleMetadata schema) {
+    return new CassandraColumnConverterFactory(schema);
+  }
 }
diff --git a/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/plan/CassandraEnumerablePrelContext.java b/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/plan/CassandraEnumerablePrelContext.java
index 06cf181a03..91e56088d8 100644
--- a/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/plan/CassandraEnumerablePrelContext.java
+++ b/contrib/storage-cassandra/src/main/java/org/apache/drill/exec/store/cassandra/plan/CassandraEnumerablePrelContext.java
@@ -28,6 +28,8 @@ import org.apache.calcite.rel.core.TableScan;
 import org.apache.calcite.rel.type.RelDataTypeField;
 import org.apache.drill.exec.planner.common.DrillRelOptUtil;
 import org.apache.drill.exec.store.SubsetRemover;
+import org.apache.drill.exec.store.cassandra.CassandraColumnConverterFactoryProvider;
+import org.apache.drill.exec.store.enumerable.ColumnConverterFactoryProvider;
 import org.apache.drill.exec.store.enumerable.plan.EnumerablePrelContext;
 
 import java.util.Collections;
@@ -78,4 +80,9 @@ public class CassandraEnumerablePrelContext implements EnumerablePrelContext {
     List<String> qualifiedName = scan.getTable().getQualifiedName();
     return String.join(".", qualifiedName.subList(0, qualifiedName.size() - 1));
   }
+
+  @Override
+  public ColumnConverterFactoryProvider factoryProvider() {
+    return CassandraColumnConverterFactoryProvider.INSTANCE;
+  }
 }
diff --git a/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/BaseCassandraTest.java b/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/BaseCassandraTest.java
index a8eb40f9b2..faae8470bc 100644
--- a/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/BaseCassandraTest.java
+++ b/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/BaseCassandraTest.java
@@ -27,8 +27,8 @@ public class BaseCassandraTest extends ClusterTest {
 
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
-    TestCassandraSuit.initCassandra();
-    initCassandraPlugin(TestCassandraSuit.cassandra);
+    TestCassandraSuite.initCassandra();
+    initCassandraPlugin(TestCassandraSuite.cassandra);
   }
 
   private static void initCassandraPlugin(CassandraContainer<?> cassandra) throws Exception {
@@ -46,8 +46,8 @@ public class BaseCassandraTest extends ClusterTest {
 
   @AfterClass
   public static void tearDownCassandra() {
-    if (TestCassandraSuit.isRunningSuite()) {
-      TestCassandraSuit.tearDownCluster();
+    if (TestCassandraSuite.isRunningSuite()) {
+      TestCassandraSuite.tearDownCluster();
     }
   }
 }
diff --git a/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/CassandraQueryTest.java b/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/CassandraQueryTest.java
index e7aecaa0bc..536a1027cd 100644
--- a/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/CassandraQueryTest.java
+++ b/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/CassandraQueryTest.java
@@ -18,10 +18,15 @@
 package org.apache.drill.exec.store.cassandra;
 
 import org.apache.drill.common.exceptions.UserRemoteException;
+import org.joda.time.Period;
 import org.junit.Test;
 
 import java.math.BigDecimal;
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.nio.ByteOrder;
 import java.time.LocalDate;
+import java.util.UUID;
 
 import static org.hamcrest.CoreMatchers.containsString;
 import static org.hamcrest.MatcherAssert.assertThat;
@@ -32,21 +37,24 @@ public class CassandraQueryTest extends BaseCassandraTest {
   @Test
   public void testSelectAll() throws Exception {
     testBuilder()
-        .sqlQuery("select * from cassandra.test_keyspace.`employee`")
-        .unOrdered()
+        .sqlQuery("select * from cassandra.test_keyspace.`employee` order by employee_id")
+        .ordered()
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role")
-        .baselineValues(1L, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0f, 0, "Graduate Degree", "S", "F", "Senior Management")
-        .baselineValues(2L, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "M", "M", "Senior Management")
-        .baselineValues(4L, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "S", "M", "Senior Management")
-        .baselineValues(5L, "Maya Gutierrez", "Maya", "Gutierrez", 2, "VP Country Manager", 0, 1, "1951-05-10", "1998-01-01 00:00:00.0", 35000.0f, 1, "Bachelors Degree", "M", "F", "Senior Management")
-        .baselineValues(6L, "Roberta Damstra", "Roberta", "Damstra", 3, "VP Information Systems", 0, 2, "1942-10-08", "1994-12-01 00:00:00.0", 25000.0f, 1, "Bachelors Degree", "M", "F", "Senior Management")
-        .baselineValues(7L, "Rebecca Kanagaki", "Rebecca", "Kanagaki", 4, "VP Human Resources", 0, 3, "1949-03-27", "1994-12-01 00:00:00.0", 15000.0f, 1, "Bachelors Degree", "M", "F", "Senior Management")
-        .baselineValues(8L, "Kim Brunner", "Kim", "Brunner", 11, "Store Manager", 9, 11, "1922-08-10", "1998-01-01 00:00:00.0", 10000.0f, 5, "Bachelors Degree", "S", "F", "Store Management")
-        .baselineValues(9L, "Brenda Blumberg", "Brenda", "Blumberg", 11, "Store Manager", 21, 11, "1979-06-23", "1998-01-01 00:00:00.0", 17000.0f, 5, "Graduate Degree", "M", "F", "Store Management")
-        .baselineValues(10L, "Darren Stanz", "Darren", "Stanz", 5, "VP Finance", 0, 5, "1949-08-26", "1994-12-01 00:00:00.0", 50000.0f, 1, "Partial College", "M", "M", "Senior Management")
-        .baselineValues(11L, "Jonathan Murraiin", "Jonathan", "Murraiin", 11, "Store Manager", 1, 11, "1967-06-20", "1998-01-01 00:00:00.0", 15000.0f, 5, "Graduate Degree", "S", "M", "Store Management")
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "ascii_field", "blob_field", "boolean_field", "date_field", "decimal_field", "double_field",
+            "duration_field", "inet_field", "time_field", "timestamp_field", "timeuuid_field",
+            "uuid_field", "varchar_field", "varint_field")
+        .baselineValues(1L, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0f, 0, "Graduate Degree", "S", "F", "Senior Management", "abc", "0000000000000003", true, 15008L, BigDecimal.valueOf(123), 321.123, new Period(0, 0, 0, 3, 0, 0, 0, 320688000), InetAddress.getByName("8.8.8.8").getAddress(), 14700000000000L, 1296705900000L, getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f"), getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f") [...]
+        .baselineValues(2L, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "M", "M", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(4L, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "S", "M", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(5L, "Maya Gutierrez", "Maya", "Gutierrez", 2, "VP Country Manager", 0, 1, "1951-05-10", "1998-01-01 00:00:00.0", 35000.0f, 1, "Bachelors Degree", "M", "F", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(6L, "Roberta Damstra", "Roberta", "Damstra", 3, "VP Information Systems", 0, 2, "1942-10-08", "1994-12-01 00:00:00.0", 25000.0f, 1, "Bachelors Degree", "M", "F", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(7L, "Rebecca Kanagaki", "Rebecca", "Kanagaki", 4, "VP Human Resources", 0, 3, "1949-03-27", "1994-12-01 00:00:00.0", 15000.0f, 1, "Bachelors Degree", "M", "F", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(8L, "Kim Brunner", "Kim", "Brunner", 11, "Store Manager", 9, 11, "1922-08-10", "1998-01-01 00:00:00.0", 10000.0f, 5, "Bachelors Degree", "S", "F", "Store Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(9L, "Brenda Blumberg", "Brenda", "Blumberg", 11, "Store Manager", 21, 11, "1979-06-23", "1998-01-01 00:00:00.0", 17000.0f, 5, "Graduate Degree", "M", "F", "Store Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(10L, "Darren Stanz", "Darren", "Stanz", 5, "VP Finance", 0, 5, "1949-08-26", "1994-12-01 00:00:00.0", 50000.0f, 1, "Partial College", "M", "M", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(11L, "Jonathan Murraiin", "Jonathan", "Murraiin", 11, "Store Manager", 1, 11, "1967-06-20", "1998-01-01 00:00:00.0", 15000.0f, 5, "Graduate Degree", "S", "M", "Store Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
         .go();
   }
 
@@ -76,9 +84,18 @@ public class CassandraQueryTest extends BaseCassandraTest {
         .unOrdered()
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role")
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "ascii_field", "blob_field", "boolean_field", "date_field", "decimal_field", "double_field",
+            "duration_field", "inet_field", "time_field", "timestamp_field", "timeuuid_field",
+            "uuid_field", "varchar_field", "varint_field")
         .baselineValues(1L, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26",
-            "1994-12-01 00:00:00.0", 80000.0f, 0, "Graduate Degree", "S", "F", "Senior Management")
+            "1994-12-01 00:00:00.0", 80000.0f, 0, "Graduate Degree", "S", "F", "Senior Management",
+            "abc", "0000000000000003", true, 15008L, BigDecimal.valueOf(123), 321.123,
+            new Period(0, 0, 0, 3, 0, 0, 0, 320688000),
+            InetAddress.getByName("8.8.8.8").getAddress(), 14700000000000L, 1296705900000L,
+            getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f"),
+            getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f"),
+            "abc", 123L)
         .go();
   }
 
@@ -144,10 +161,13 @@ public class CassandraQueryTest extends BaseCassandraTest {
         .ordered()
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role")
-        .baselineValues(1L, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0f, 0, "Graduate Degree", "S", "F", "Senior Management")
-        .baselineValues(2L, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "M", "M", "Senior Management")
-        .baselineValues(4L, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "S", "M", "Senior Management")
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "ascii_field", "blob_field", "boolean_field", "date_field", "decimal_field", "double_field",
+            "duration_field", "inet_field", "time_field", "timestamp_field", "timeuuid_field",
+            "uuid_field", "varchar_field", "varint_field")
+        .baselineValues(1L, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0f, 0, "Graduate Degree", "S", "F", "Senior Management", "abc", "0000000000000003", true, 15008L, BigDecimal.valueOf(123), 321.123, new Period(0, 0, 0, 3, 0, 0, 0, 320688000), InetAddress.getByName("8.8.8.8").getAddress(), 14700000000000L, 1296705900000L, getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f"), getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f") [...]
+        .baselineValues(2L, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "M", "M", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
+        .baselineValues(4L, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "S", "M", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null)
         .go();
   }
 
@@ -275,10 +295,13 @@ public class CassandraQueryTest extends BaseCassandraTest {
         .baselineColumns("full_name")
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role", "full_name0")
-        .baselineValues(1L, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0f, 0, "Graduate Degree", "S", "F", "Senior Management", 123)
-        .baselineValues(2L, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "M", "M", "Senior Management", 123)
-        .baselineValues(4L, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "S", "M", "Senior Management", 123)
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "ascii_field", "blob_field", "boolean_field", "date_field", "decimal_field", "double_field",
+            "duration_field", "inet_field", "time_field", "timestamp_field", "timeuuid_field",
+            "uuid_field", "varchar_field", "varint_field", "full_name0")
+        .baselineValues(1L, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0f, 0, "Graduate Degree", "S", "F", "Senior Management", "abc", "0000000000000003", true, 15008L, BigDecimal.valueOf(123), 321.123, new Period(0, 0, 0, 3, 0, 0, 0, 320688000), InetAddress.getByName("8.8.8.8").getAddress(), 14700000000000L, 1296705900000L, getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f"), getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f") [...]
+        .baselineValues(2L, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "M", "M", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null, 123)
+        .baselineValues(4L, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0f, 1, "Graduate Degree", "S", "M", "Senior Management", null, null, null, null, null, null, null, null, null, null, null, null, null, null, 123)
         .go();
   }
 
@@ -327,9 +350,27 @@ public class CassandraQueryTest extends BaseCassandraTest {
         .ordered()
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role")
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "ascii_field", "blob_field", "boolean_field", "date_field", "decimal_field", "double_field",
+            "duration_field", "inet_field", "time_field", "timestamp_field", "timeuuid_field",
+            "uuid_field", "varchar_field", "varint_field")
         .baselineValues(1L, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, LocalDate.parse("1961-08-26"),
-            "1994-12-01 00:00:00.0", new BigDecimal("80000.00"), 0, "Graduate Degree", "S", "F", "Senior Management")
+            "1994-12-01 00:00:00.0", new BigDecimal("80000.00"), 0, "Graduate Degree", "S", "F", "Senior Management",
+            "abc", "0000000000000003", true, 15008L, BigDecimal.valueOf(123), 321.123,
+            new Period(0, 0, 0, 3, 0, 0, 0, 320688000),
+            InetAddress.getByName("8.8.8.8").getAddress(), 14700000000000L, 1296705900000L,
+            getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f"),
+            getUuidBytes("50554d6e-29bb-11e5-b345-feff819cdc9f"),
+            "abc", 123L)
         .go();
   }
+
+  private static byte[] getUuidBytes(String name) {
+    UUID uuid = UUID.fromString(name);
+    return ByteBuffer.wrap(new byte[16])
+      .order(ByteOrder.BIG_ENDIAN)
+      .putLong(uuid.getMostSignificantBits())
+      .putLong(uuid.getLeastSignificantBits())
+      .array();
+  }
 }
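
A note on the new getUuidBytes helper: the uuid and timeuuid baselines in this test compare against 16-byte arrays, i.e. the UUID's big-endian layout with the most-significant long first. ByteBuffer already defaults to big-endian, so the explicit order(ByteOrder.BIG_ENDIAN) call is documentary rather than functional. A stand-alone round trip of the same encoding:

    import java.nio.ByteBuffer;
    import java.util.UUID;

    public class UuidBytesRoundTrip {
      public static void main(String[] args) {
        UUID original = UUID.fromString("50554d6e-29bb-11e5-b345-feff819cdc9f");
        ByteBuffer buf = ByteBuffer.allocate(16)
            .putLong(original.getMostSignificantBits())    // bytes 0..7
            .putLong(original.getLeastSignificantBits());  // bytes 8..15
        buf.flip();
        UUID decoded = new UUID(buf.getLong(), buf.getLong());
        System.out.println(original.equals(decoded));      // true
      }
    }
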
diff --git a/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuit.java b/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuite.java
similarity index 94%
copy from contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuit.java
copy to contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuite.java
index 9009ba78d4..98ead70cec 100644
--- a/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuit.java
+++ b/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuite.java
@@ -32,7 +32,7 @@ import org.testcontainers.containers.CassandraContainer;
 @Category(SlowTest.class)
 @RunWith(Suite.class)
 @Suite.SuiteClasses({CassandraComplexTypesTest.class, CassandraPlanTest.class, CassandraQueryTest.class})
-public class TestCassandraSuit extends BaseTest {
+public class TestCassandraSuite extends BaseTest {
 
   protected static CassandraContainer<?> cassandra;
 
@@ -42,7 +42,7 @@ public class TestCassandraSuit extends BaseTest {
 
   @BeforeClass
   public static void initCassandra() {
-    synchronized (TestCassandraSuit.class) {
+    synchronized (TestCassandraSuite.class) {
       if (initCount.get() == 0) {
         startCassandra();
       }
@@ -57,7 +57,7 @@ public class TestCassandraSuit extends BaseTest {
 
   @AfterClass
   public static void tearDownCluster() {
-    synchronized (TestCassandraSuit.class) {
+    synchronized (TestCassandraSuite.class) {
       if (initCount.decrementAndGet() == 0 && cassandra != null) {
         cassandra.stop();
       }
diff --git a/contrib/storage-cassandra/src/test/resources/queries.cql b/contrib/storage-cassandra/src/test/resources/queries.cql
index 527d88c3f3..6382c531f4 100644
--- a/contrib/storage-cassandra/src/test/resources/queries.cql
+++ b/contrib/storage-cassandra/src/test/resources/queries.cql
@@ -35,13 +35,27 @@ CREATE TABLE test_keyspace.employee (
     marital_status text,
     gender text,
     management_role text,
+    ascii_field ascii,
+    blob_field blob,
+    boolean_field boolean,
+    date_field date,
+    decimal_field decimal,
+    double_field double,
+    duration_field duration,
+    inet_field inet,
+    time_field time,
+    timestamp_field timestamp,
+    timeuuid_field timeuuid,
+    uuid_field uuid,
+    varchar_field varchar,
+    varint_field varint,
     PRIMARY KEY (full_name, employee_id)
 ) WITH CLUSTERING ORDER BY (employee_id ASC);
 
 USE test_keyspace;
 
-INSERT INTO employee(employee_id, full_name, first_name, last_name, position_id, position_title, store_id, department_id, birth_date, hire_date, salary, supervisor_id, education_level, marital_status, gender, management_role)
- VALUES (1, 'Sheri Nowmer', 'Sheri', 'Nowmer', 1, 'President',0,1, '1961-08-26', '1994-12-01 00:00:00.0',80000.0 ,0, 'Graduate Degree', 'S', 'F', 'Senior Management');
+INSERT INTO employee(employee_id, full_name, first_name, last_name, position_id, position_title, store_id, department_id, birth_date, hire_date, salary, supervisor_id, education_level, marital_status, gender, management_role, ascii_field, blob_field, boolean_field, date_field, decimal_field, double_field, duration_field, inet_field, time_field,  timestamp_field, timeuuid_field, uuid_field, varchar_field, varint_field)
+ VALUES (1, 'Sheri Nowmer', 'Sheri', 'Nowmer', 1, 'President',0,1, '1961-08-26', '1994-12-01 00:00:00.0',80000.0 ,0, 'Graduate Degree', 'S', 'F', 'Senior Management', 'abc', bigintAsBlob(3), true, '2011-02-03', 123.456, 321.123, 3d89h4m48s, '8.8.8.8', '04:05:00',  '2011-02-03 04:05:00', 50554d6e-29bb-11e5-b345-feff819cdc9f, 50554d6e-29bb-11e5-b345-feff819cdc9f, 'abc', 123);
 INSERT INTO employee(employee_id, full_name, first_name, last_name, position_id, position_title, store_id, department_id, birth_date, hire_date, salary, supervisor_id, education_level, marital_status, gender, management_role)
  VALUES (2, 'Derrick Whelply', 'Derrick', 'Whelply', 2, 'VP Country Manager',0,1, '1915-07-03', '1994-12-01 00:00:00.0',40000.0 ,1, 'Graduate Degree', 'M', 'M', 'Senior Management');
 INSERT INTO employee(employee_id, full_name, first_name, last_name, position_id, position_title, store_id, department_id, birth_date, hire_date, salary, supervisor_id, education_level, marital_status, gender, management_role)
diff --git a/contrib/storage-druid/pom.xml b/contrib/storage-druid/pom.xml
index 5b3643effe..e89b60a1b6 100755
--- a/contrib/storage-druid/pom.xml
+++ b/contrib/storage-druid/pom.xml
@@ -22,7 +22,7 @@
     <parent>
         <artifactId>drill-contrib-parent</artifactId>
         <groupId>org.apache.drill.contrib</groupId>
-        <version>1.20.0</version>
+        <version>1.20.1-SNAPSHOT</version>
     </parent>
     <modelVersion>4.0.0</modelVersion>
 
diff --git a/contrib/storage-elasticsearch/pom.xml b/contrib/storage-elasticsearch/pom.xml
index 3cf4ac4004..06102b4f49 100644
--- a/contrib/storage-elasticsearch/pom.xml
+++ b/contrib/storage-elasticsearch/pom.xml
@@ -20,13 +20,10 @@
 -->
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <modelVersion>4.0.0</modelVersion>
-  <properties>
-    <test.elasticsearch.version>7.10.1</test.elasticsearch.version>
-  </properties>
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-storage-elasticsearch</artifactId>
@@ -80,6 +77,18 @@
       <version>0.4</version>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.testcontainers</groupId>
+      <artifactId>elasticsearch</artifactId>
+      <version>${testcontainers.version}</version>
+      <scope>test</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>com.datastax.cassandra</groupId>
+          <artifactId>cassandra-driver-core</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
   </dependencies>
 
   <build>
@@ -88,36 +97,17 @@
         <artifactId>maven-surefire-plugin</artifactId>
         <configuration>
           <forkCount combine.self="override">1</forkCount>
+          <includes>
+            <include>**/TestElasticsearchSuite.class</include>
+          </includes>
+          <excludes>
+            <exclude>**/ElasticComplexTypesTest.java</exclude>
+            <exclude>**/ElasticInfoSchemaTest.java</exclude>
+            <exclude>**/ElasticSearchPlanTest.java</exclude>
+            <exclude>**/ElasticSearchQueryTest.java</exclude>
+          </excludes>
         </configuration>
       </plugin>
-      <plugin>
-        <groupId>com.github.alexcojocaru</groupId>
-        <artifactId>elasticsearch-maven-plugin</artifactId>
-        <version>6.19</version>
-        <configuration>
-          <version>${test.elasticsearch.version}</version>
-          <clusterName>test</clusterName>
-          <transportPort>9300</transportPort>
-          <httpPort>9200</httpPort>
-          <skip>${skipTests}</skip>
-        </configuration>
-        <executions>
-          <execution>
-            <id>start-elasticsearch</id>
-            <phase>process-test-classes</phase>
-            <goals>
-              <goal>runforked</goal>
-            </goals>
-          </execution>
-          <execution>
-            <id>stop-elasticsearch</id>
-            <phase>post-integration-test</phase>
-            <goals>
-              <goal>stop</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
     </plugins>
   </build>
 
diff --git a/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/ElasticsearchColumnConverterFactory.java b/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/ElasticsearchColumnConverterFactory.java
new file mode 100644
index 0000000000..9e2ee712fd
--- /dev/null
+++ b/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/ElasticsearchColumnConverterFactory.java
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.store.elasticsearch;
+
+import org.apache.drill.exec.record.ColumnConverter;
+import org.apache.drill.exec.record.ColumnConverterFactory;
+import org.apache.drill.exec.record.metadata.ColumnMetadata;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.TupleWriter;
+import org.apache.drill.exec.vector.accessor.ValueWriter;
+
+import java.util.Map;
+import java.util.stream.Collectors;
+import java.util.stream.StreamSupport;
+
+public class ElasticsearchColumnConverterFactory extends ColumnConverterFactory {
+
+  public ElasticsearchColumnConverterFactory(TupleMetadata providedSchema) {
+    super(providedSchema);
+  }
+
+  @Override
+  public ColumnConverter.ScalarColumnConverter buildScalar(ColumnMetadata readerSchema, ValueWriter writer) {
+    switch (readerSchema.type()) {
+      case BIT:
+        return new ColumnConverter.ScalarColumnConverter(value -> writer.setBoolean((Boolean) value));
+      default:
+        return super.buildScalar(readerSchema, writer);
+    }
+  }
+
+  @Override
+  protected ColumnConverter getMapConverter(TupleMetadata providedSchema,
+    TupleMetadata readerSchema, TupleWriter tupleWriter) {
+    Map<String, ColumnConverter> converters = StreamSupport.stream(readerSchema.spliterator(), false)
+      .collect(Collectors.toMap(
+        ColumnMetadata::name,
+        columnMetadata ->
+          getConverter(providedSchema, columnMetadata, tupleWriter.column(columnMetadata.name()))));
+
+    return new ElasticMapColumnConverter(this, providedSchema, tupleWriter, converters);
+  }
+
+  private static class ElasticMapColumnConverter extends ColumnConverter.MapColumnConverter {
+
+    public ElasticMapColumnConverter(ColumnConverterFactory factory, TupleMetadata providedSchema, TupleWriter tupleWriter, Map<String, ColumnConverter> converters) {
+      super(factory, providedSchema, tupleWriter, converters);
+    }
+  }
+}
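
The BIT override above is needed because Elasticsearch hands boolean JSON values to the reader as java.lang.Boolean, so the converter casts and forwards them to the writer; all other types fall through to the base factory. ScalarColumnConverter, as used here, essentially wraps a value consumer, so a custom converter is just a cast-and-write lambda. The same shape in isolation, with a plain Consumer standing in for Drill's ValueWriter (purely illustrative):

    import java.util.function.Consumer;

    public class ScalarConverterShape {
      public static void main(String[] args) {
        // For BIT columns the reader supplies java.lang.Boolean values.
        Consumer<Object> bitConverter = value -> {
          boolean b = (Boolean) value;        // cast, then delegate to the writer
          System.out.println("setBoolean(" + b + ")");
        };
        bitConverter.accept(Boolean.TRUE);    // prints: setBoolean(true)
      }
    }
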
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java b/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/ElasticsearchColumnConverterFactoryProvider.java
similarity index 57%
copy from exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
copy to contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/ElasticsearchColumnConverterFactoryProvider.java
index 83f8f72052..40cce4d25e 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
+++ b/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/ElasticsearchColumnConverterFactoryProvider.java
@@ -15,22 +15,17 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.drill.exec.store.enumerable.plan;
+package org.apache.drill.exec.store.elasticsearch;
 
-import org.apache.calcite.plan.RelOptCluster;
-import org.apache.calcite.rel.RelNode;
+import org.apache.drill.exec.record.ColumnConverterFactory;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.store.enumerable.ColumnConverterFactoryProvider;
 
-import java.util.Map;
+public class ElasticsearchColumnConverterFactoryProvider implements ColumnConverterFactoryProvider {
+  public static final ColumnConverterFactoryProvider INSTANCE = new ElasticsearchColumnConverterFactoryProvider();
 
-public interface EnumerablePrelContext {
-
-  String generateCode(RelOptCluster cluster, RelNode relNode);
-
-  RelNode transformNode(RelNode input);
-
-  Map<String, Integer> getFieldsMap(RelNode transformedNode);
-
-  String getPlanPrefix();
-
-  String getTablePath(RelNode input);
+  @Override
+  public ColumnConverterFactory getFactory(TupleMetadata schema) {
+    return new ElasticsearchColumnConverterFactory(schema);
+  }
 }
diff --git a/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/plan/ElasticSearchEnumerablePrelContext.java b/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/plan/ElasticSearchEnumerablePrelContext.java
index 29db1ca79e..e1a92bb84f 100644
--- a/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/plan/ElasticSearchEnumerablePrelContext.java
+++ b/contrib/storage-elasticsearch/src/main/java/org/apache/drill/exec/store/elasticsearch/plan/ElasticSearchEnumerablePrelContext.java
@@ -27,6 +27,8 @@ import org.apache.calcite.rel.RelNode;
 import org.apache.calcite.rel.type.RelDataTypeField;
 import org.apache.drill.common.expression.SchemaPath;
 import org.apache.drill.exec.store.SubsetRemover;
+import org.apache.drill.exec.store.elasticsearch.ElasticsearchColumnConverterFactoryProvider;
+import org.apache.drill.exec.store.enumerable.ColumnConverterFactoryProvider;
 import org.apache.drill.exec.store.enumerable.plan.EnumerablePrelContext;
 
 import java.util.Collections;
@@ -76,4 +78,9 @@ public class ElasticSearchEnumerablePrelContext implements EnumerablePrelContext
   public String getTablePath(RelNode input) {
     return null;
   }
+
+  @Override
+  public ColumnConverterFactoryProvider factoryProvider() {
+    return ElasticsearchColumnConverterFactoryProvider.INSTANCE;
+  }
 }
diff --git a/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticComplexTypesTest.java b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticComplexTypesTest.java
index 43d4681237..9f4f52dafa 100644
--- a/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticComplexTypesTest.java
+++ b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticComplexTypesTest.java
@@ -46,18 +46,17 @@ import static org.apache.drill.test.TestBuilder.mapOf;
 
 public class ElasticComplexTypesTest extends ClusterTest {
 
-  private static final String HOST = "http://localhost:9200";
-
   private static final List<String> indexNames = new ArrayList<>();
 
   public static RestHighLevelClient restHighLevelClient;
 
   @BeforeClass
   public static void init() throws Exception {
+    TestElasticsearchSuite.initElasticsearch();
     startCluster(ClusterFixture.builder(dirTestWatcher));
 
     ElasticsearchStorageConfig config = new ElasticsearchStorageConfig(
-        Collections.singletonList(HOST), null, null, PlainCredentialsProvider.EMPTY_CREDENTIALS_PROVIDER);
+        Collections.singletonList(TestElasticsearchSuite.getAddress()), null, null, PlainCredentialsProvider.EMPTY_CREDENTIALS_PROVIDER);
     config.setEnabled(true);
     cluster.defineStoragePlugin("elastic", config);
 
@@ -69,10 +68,12 @@ public class ElasticComplexTypesTest extends ClusterTest {
     for (String indexName : indexNames) {
       restHighLevelClient.indices().delete(new DeleteIndexRequest(indexName), RequestOptions.DEFAULT);
     }
+    TestElasticsearchSuite.tearDownCluster();
   }
 
   private static void prepareData() throws IOException {
-    restHighLevelClient = new RestHighLevelClient(RestClient.builder(HttpHost.create(HOST)));
+    restHighLevelClient = new RestHighLevelClient(
+      RestClient.builder(HttpHost.create(TestElasticsearchSuite.elasticsearch.getHttpHostAddress())));
 
     String indexName = "arr";
     indexNames.add(indexName);
diff --git a/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticInfoSchemaTest.java b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticInfoSchemaTest.java
index cad11bbf7f..aa4dae47b9 100644
--- a/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticInfoSchemaTest.java
+++ b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticInfoSchemaTest.java
@@ -41,18 +41,17 @@ import java.util.List;
 
 public class ElasticInfoSchemaTest extends ClusterTest {
 
-  private static final String HOST = "http://localhost:9200";
-
   private static final List<String> indexNames = new ArrayList<>();
 
   public static RestHighLevelClient restHighLevelClient;
 
   @BeforeClass
   public static void init() throws Exception {
+    TestElasticsearchSuite.initElasticsearch();
     startCluster(ClusterFixture.builder(dirTestWatcher));
 
     ElasticsearchStorageConfig config = new ElasticsearchStorageConfig(
-        Collections.singletonList(HOST), null, null, PlainCredentialsProvider.EMPTY_CREDENTIALS_PROVIDER);
+        Collections.singletonList(TestElasticsearchSuite.getAddress()), null, null, PlainCredentialsProvider.EMPTY_CREDENTIALS_PROVIDER);
     config.setEnabled(true);
     cluster.defineStoragePlugin("elastic", config);
 
@@ -64,10 +63,11 @@ public class ElasticInfoSchemaTest extends ClusterTest {
     for (String indexName : indexNames) {
       restHighLevelClient.indices().delete(new DeleteIndexRequest(indexName), RequestOptions.DEFAULT);
     }
+    TestElasticsearchSuite.tearDownCluster();
   }
 
   private static void prepareData() throws IOException {
-    restHighLevelClient = new RestHighLevelClient(RestClient.builder(HttpHost.create(HOST)));
+    restHighLevelClient = new RestHighLevelClient(RestClient.builder(HttpHost.create(TestElasticsearchSuite.getAddress())));
 
     String indexName = "t1";
     indexNames.add(indexName);
diff --git a/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticSearchPlanTest.java b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticSearchPlanTest.java
index 74773af364..db81edf592 100644
--- a/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticSearchPlanTest.java
+++ b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticSearchPlanTest.java
@@ -39,18 +39,17 @@ import java.util.Collections;
 
 public class ElasticSearchPlanTest extends ClusterTest {
 
-  private static final String HOST = "http://localhost:9200";
-
   public static RestHighLevelClient restHighLevelClient;
 
   private static String indexName;
 
   @BeforeClass
   public static void init() throws Exception {
+    TestElasticsearchSuite.initElasticsearch();
     startCluster(ClusterFixture.builder(dirTestWatcher));
 
     ElasticsearchStorageConfig config = new ElasticsearchStorageConfig(
-        Collections.singletonList(HOST), null, null, PlainCredentialsProvider.EMPTY_CREDENTIALS_PROVIDER);
+        Collections.singletonList(TestElasticsearchSuite.getAddress()), null, null, PlainCredentialsProvider.EMPTY_CREDENTIALS_PROVIDER);
     config.setEnabled(true);
     cluster.defineStoragePlugin("elastic", config);
 
@@ -60,10 +59,11 @@ public class ElasticSearchPlanTest extends ClusterTest {
   @AfterClass
   public static void cleanUp() throws IOException {
     restHighLevelClient.indices().delete(new DeleteIndexRequest(indexName), RequestOptions.DEFAULT);
+    TestElasticsearchSuite.tearDownCluster();
   }
 
   private static void prepareData() throws IOException {
-    restHighLevelClient = new RestHighLevelClient(RestClient.builder(HttpHost.create(HOST)));
+    restHighLevelClient = new RestHighLevelClient(RestClient.builder(HttpHost.create(TestElasticsearchSuite.getAddress())));
 
     indexName = "nation";
     CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName);
diff --git a/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticSearchQueryTest.java b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticSearchQueryTest.java
index 79f934423f..374c449107 100644
--- a/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticSearchQueryTest.java
+++ b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/ElasticSearchQueryTest.java
@@ -38,6 +38,7 @@ import org.junit.Test;
 import java.io.IOException;
 import java.math.BigDecimal;
 import java.time.LocalDate;
+import java.util.Base64;
 import java.util.Collections;
 
 import static org.hamcrest.CoreMatchers.containsString;
@@ -46,18 +47,17 @@ import static org.junit.Assert.fail;
 
 public class ElasticSearchQueryTest extends ClusterTest {
 
-  private static final String HOST = "http://localhost:9200";
-
   public static RestHighLevelClient restHighLevelClient;
 
   private static String indexName;
 
   @BeforeClass
   public static void init() throws Exception {
+    TestElasticsearchSuite.initElasticsearch();
     startCluster(ClusterFixture.builder(dirTestWatcher));
 
     ElasticsearchStorageConfig config = new ElasticsearchStorageConfig(
-        Collections.singletonList(HOST), null, null, PlainCredentialsProvider.EMPTY_CREDENTIALS_PROVIDER);
+        Collections.singletonList(TestElasticsearchSuite.getAddress()), null, null, PlainCredentialsProvider.EMPTY_CREDENTIALS_PROVIDER);
     config.setEnabled(true);
     cluster.defineStoragePlugin("elastic", config);
 
@@ -67,10 +67,11 @@ public class ElasticSearchQueryTest extends ClusterTest {
   @AfterClass
   public static void cleanUp() throws IOException {
     restHighLevelClient.indices().delete(new DeleteIndexRequest(indexName), RequestOptions.DEFAULT);
+    TestElasticsearchSuite.tearDownCluster();
   }
 
   private static void prepareData() throws IOException {
-    restHighLevelClient = new RestHighLevelClient(RestClient.builder(HttpHost.create(HOST)));
+    restHighLevelClient = new RestHighLevelClient(RestClient.builder(HttpHost.create(TestElasticsearchSuite.getAddress())));
 
     indexName = "employee";
     CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName);
@@ -95,6 +96,14 @@ public class ElasticSearchQueryTest extends ClusterTest {
     builder.field("marital_status", "S");
     builder.field("gender", "F");
     builder.field("management_role", "Senior Management");
+    builder.field("binary_field", "Senior Management".getBytes());
+    builder.field("boolean_field", true);
+    builder.timeField("date_field", "2015/01/01 12:10:30");
+    builder.field("byte_field", (byte) 123);
+    builder.field("long_field", 123L);
+    builder.field("float_field", 123F);
+    builder.field("short_field", (short) 123);
+    builder.field("decimal_field", new BigDecimal("123.45"));
     builder.endObject();
     IndexRequest indexRequest = new IndexRequest(indexName).source(builder);
     restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
@@ -307,17 +316,19 @@ public class ElasticSearchQueryTest extends ClusterTest {
         .unOrdered()
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role")
-        .baselineValues(1, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0, 0, "Graduate Degree", "S", "F", "Senior Management")
-        .baselineValues(2, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "M", "M", "Senior Management")
-        .baselineValues(4, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "S", "M", "Senior Management")
-        .baselineValues(5, "Maya Gutierrez", "Maya", "Gutierrez", 2, "VP Country Manager", 0, 1, "1951-05-10", "1998-01-01 00:00:00.0", 35000.0, 1, "Bachelors Degree", "M", "F", "Senior Management")
-        .baselineValues(6, "Roberta Damstra", "Roberta", "Damstra", 3, "VP Information Systems", 0, 2, "1942-10-08", "1994-12-01 00:00:00.0", 25000.0, 1, "Bachelors Degree", "M", "F", "Senior Management")
-        .baselineValues(7, "Rebecca Kanagaki", "Rebecca", "Kanagaki", 4, "VP Human Resources", 0, 3, "1949-03-27", "1994-12-01 00:00:00.0", 15000.0, 1, "Bachelors Degree", "M", "F", "Senior Management")
-        .baselineValues(8, "Kim Brunner", "Kim", "Brunner", 11, "Store Manager", 9, 11, "1922-08-10", "1998-01-01 00:00:00.0", 10000.0, 5, "Bachelors Degree", "S", "F", "Store Management")
-        .baselineValues(9, "Brenda Blumberg", "Brenda", "Blumberg", 11, "Store Manager", 21, 11, "1979-06-23", "1998-01-01 00:00:00.0", 17000.0, 5, "Graduate Degree", "M", "F", "Store Management")
-        .baselineValues(10, "Darren Stanz", "Darren", "Stanz", 5, "VP Finance", 0, 5, "1949-08-26", "1994-12-01 00:00:00.0", 50000.0, 1, "Partial College", "M", "M", "Senior Management")
-        .baselineValues(11, "Jonathan Murraiin", "Jonathan", "Murraiin", 11, "Store Manager", 1, 11, "1967-06-20", "1998-01-01 00:00:00.0", 15000.0, 5, "Graduate Degree", "S", "M", "Store Management")
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "binary_field", "boolean_field", "date_field", "byte_field", "long_field", "float_field",
+            "short_field", "decimal_field")
+        .baselineValues(1, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0, 0, "Graduate Degree", "S", "F", "Senior Management", Base64.getEncoder().encodeToString("Senior Management".getBytes()), true, "2015/01/01 12:10:30", 123, 123, 123., 123, 123.45)
+        .baselineValues(2, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "M", "M", "Senior Management", null, null, null, null, null, null, null, null)
+        .baselineValues(4, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "S", "M", "Senior Management", null, null, null, null, null, null, null, null)
+        .baselineValues(5, "Maya Gutierrez", "Maya", "Gutierrez", 2, "VP Country Manager", 0, 1, "1951-05-10", "1998-01-01 00:00:00.0", 35000.0, 1, "Bachelors Degree", "M", "F", "Senior Management", null, null, null, null, null, null, null, null)
+        .baselineValues(6, "Roberta Damstra", "Roberta", "Damstra", 3, "VP Information Systems", 0, 2, "1942-10-08", "1994-12-01 00:00:00.0", 25000.0, 1, "Bachelors Degree", "M", "F", "Senior Management", null, null, null, null, null, null, null, null)
+        .baselineValues(7, "Rebecca Kanagaki", "Rebecca", "Kanagaki", 4, "VP Human Resources", 0, 3, "1949-03-27", "1994-12-01 00:00:00.0", 15000.0, 1, "Bachelors Degree", "M", "F", "Senior Management", null, null, null, null, null, null, null, null)
+        .baselineValues(8, "Kim Brunner", "Kim", "Brunner", 11, "Store Manager", 9, 11, "1922-08-10", "1998-01-01 00:00:00.0", 10000.0, 5, "Bachelors Degree", "S", "F", "Store Management", null, null, null, null, null, null, null, null)
+        .baselineValues(9, "Brenda Blumberg", "Brenda", "Blumberg", 11, "Store Manager", 21, 11, "1979-06-23", "1998-01-01 00:00:00.0", 17000.0, 5, "Graduate Degree", "M", "F", "Store Management", null, null, null, null, null, null, null, null)
+        .baselineValues(10, "Darren Stanz", "Darren", "Stanz", 5, "VP Finance", 0, 5, "1949-08-26", "1994-12-01 00:00:00.0", 50000.0, 1, "Partial College", "M", "M", "Senior Management", null, null, null, null, null, null, null, null)
+        .baselineValues(11, "Jonathan Murraiin", "Jonathan", "Murraiin", 11, "Store Manager", 1, 11, "1967-06-20", "1998-01-01 00:00:00.0", 15000.0, 5, "Graduate Degree", "S", "M", "Store Management", null, null, null, null, null, null, null, null)
         .go();
   }
 
@@ -347,9 +358,13 @@ public class ElasticSearchQueryTest extends ClusterTest {
         .unOrdered()
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role")
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "binary_field", "boolean_field", "date_field", "byte_field", "long_field", "float_field",
+            "short_field", "decimal_field")
         .baselineValues(1, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26",
-            "1994-12-01 00:00:00.0", 80000.0, 0, "Graduate Degree", "S", "F", "Senior Management")
+            "1994-12-01 00:00:00.0", 80000.0, 0, "Graduate Degree", "S", "F", "Senior Management",
+            Base64.getEncoder().encodeToString("Senior Management".getBytes()), true,
+            "2015/01/01 12:10:30", 123, 123, 123., 123, 123.45)
         .go();
   }
 
@@ -415,10 +430,12 @@ public class ElasticSearchQueryTest extends ClusterTest {
         .ordered()
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role")
-        .baselineValues(1, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0, 0, "Graduate Degree", "S", "F", "Senior Management")
-        .baselineValues(2, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "M", "M", "Senior Management")
-        .baselineValues(4, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "S", "M", "Senior Management")
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "binary_field", "boolean_field", "date_field", "byte_field", "long_field", "float_field",
+            "short_field", "decimal_field")
+        .baselineValues(1, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0, 0, "Graduate Degree", "S", "F", "Senior Management", Base64.getEncoder().encodeToString("Senior Management".getBytes()), true, "2015/01/01 12:10:30", 123, 123, 123., 123, 123.45)
+        .baselineValues(2, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "M", "M", "Senior Management", null, null, null, null, null, null, null, null)
+        .baselineValues(4, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "S", "M", "Senior Management", null, null, null, null, null, null, null, null)
         .go();
   }
 
@@ -538,10 +555,12 @@ public class ElasticSearchQueryTest extends ClusterTest {
         .baselineColumns("full_name")
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role", "full_name0")
-        .baselineValues(1, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0, 0, "Graduate Degree", "S", "F", "Senior Management", 123)
-        .baselineValues(2, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "M", "M", "Senior Management", 123)
-        .baselineValues(4, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "S", "M", "Senior Management", 123)
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "binary_field", "boolean_field", "date_field", "byte_field", "long_field", "float_field",
+            "short_field", "decimal_field", "full_name0")
+        .baselineValues(1, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, "1961-08-26", "1994-12-01 00:00:00.0", 80000.0, 0, "Graduate Degree", "S", "F", "Senior Management", Base64.getEncoder().encodeToString("Senior Management".getBytes()), true, "2015/01/01 12:10:30", 123, 123, 123., 123, 123.45, 123)
+        .baselineValues(2, "Derrick Whelply", "Derrick", "Whelply", 2, "VP Country Manager", 0, 1, "1915-07-03", "1994-12-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "M", "M", "Senior Management", null, null, null, null, null, null, null, null, 123)
+        .baselineValues(4, "Michael Spence", "Michael", "Spence", 2, "VP Country Manager", 0, 1, "1969-06-20", "1998-01-01 00:00:00.0", 40000.0, 1, "Graduate Degree", "S", "M", "Senior Management", null, null, null, null, null, null, null, null, 123)
         .go();
   }
 
@@ -590,9 +609,13 @@ public class ElasticSearchQueryTest extends ClusterTest {
         .ordered()
         .baselineColumns("employee_id", "full_name", "first_name", "last_name", "position_id",
             "position_title", "store_id", "department_id", "birth_date", "hire_date", "salary",
-            "supervisor_id", "education_level", "marital_status", "gender", "management_role")
+            "supervisor_id", "education_level", "marital_status", "gender", "management_role",
+            "binary_field", "boolean_field", "date_field", "byte_field", "long_field", "float_field",
+            "short_field", "decimal_field")
         .baselineValues(1, "Sheri Nowmer", "Sheri", "Nowmer", 1, "President", 0, 1, LocalDate.parse("1961-08-26"),
-            "1994-12-01 00:00:00.0", new BigDecimal("80000.00"), 0, "Graduate Degree", "S", "F", "Senior Management")
+            "1994-12-01 00:00:00.0", new BigDecimal("80000.00"), 0, "Graduate Degree", "S", "F", "Senior Management",
+            Base64.getEncoder().encodeToString("Senior Management".getBytes()), true,
+            "2015/01/01 12:10:30", 123, 123, 123., 123, 123.45)
         .go();
   }
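
On the binary_field baselines in this file: XContentBuilder serialises a byte[] as a Base64 string in JSON, so the expected value is the Base64 encoding of the original bytes rather than the bytes themselves. For reference:

    import java.util.Base64;

    public class BinaryFieldBaseline {
      public static void main(String[] args) {
        String encoded = Base64.getEncoder()
            .encodeToString("Senior Management".getBytes());
        System.out.println(encoded);  // U2VuaW9yIE1hbmFnZW1lbnQ=
      }
    }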
 
diff --git a/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuit.java b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/TestElasticsearchSuite.java
similarity index 57%
rename from contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuit.java
rename to contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/TestElasticsearchSuite.java
index 9009ba78d4..dec9b7659e 100644
--- a/contrib/storage-cassandra/src/test/java/org/apache/drill/exec/store/cassandra/TestCassandraSuit.java
+++ b/contrib/storage-elasticsearch/src/test/java/org/apache/drill/exec/store/elasticsearch/TestElasticsearchSuite.java
@@ -15,10 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.drill.exec.store.cassandra;
-
-import java.time.Duration;
-import java.util.concurrent.atomic.AtomicInteger;
+package org.apache.drill.exec.store.elasticsearch;
 
 import org.apache.drill.categories.SlowTest;
 import org.apache.drill.test.BaseTest;
@@ -27,24 +24,28 @@ import org.junit.BeforeClass;
 import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Suite;
-import org.testcontainers.containers.CassandraContainer;
+import org.testcontainers.elasticsearch.ElasticsearchContainer;
+import org.testcontainers.utility.DockerImageName;
+
+import java.time.Duration;
+import java.util.concurrent.atomic.AtomicInteger;
 
 @Category(SlowTest.class)
 @RunWith(Suite.class)
-@Suite.SuiteClasses({CassandraComplexTypesTest.class, CassandraPlanTest.class, CassandraQueryTest.class})
-public class TestCassandraSuit extends BaseTest {
+@Suite.SuiteClasses({ElasticComplexTypesTest.class, ElasticInfoSchemaTest.class, ElasticSearchPlanTest.class, ElasticSearchQueryTest.class})
+public class TestElasticsearchSuite extends BaseTest {
 
-  protected static CassandraContainer<?> cassandra;
+  protected static ElasticsearchContainer elasticsearch;
 
   private static final AtomicInteger initCount = new AtomicInteger(0);
 
   private static volatile boolean runningSuite = false;
 
   @BeforeClass
-  public static void initCassandra() {
-    synchronized (TestCassandraSuit.class) {
+  public static void initElasticsearch() {
+    synchronized (TestElasticsearchSuite.class) {
       if (initCount.get() == 0) {
-        startCassandra();
+        startElasticsearch();
       }
       initCount.incrementAndGet();
       runningSuite = true;
@@ -57,19 +58,22 @@ public class TestCassandraSuit extends BaseTest {
 
   @AfterClass
   public static void tearDownCluster() {
-    synchronized (TestCassandraSuit.class) {
-      if (initCount.decrementAndGet() == 0 && cassandra != null) {
-        cassandra.stop();
+    synchronized (TestElasticsearchSuite.class) {
+      if (initCount.decrementAndGet() == 0 && elasticsearch != null) {
+        elasticsearch.stop();
       }
     }
   }
 
-  private static void startCassandra() {
-    cassandra = new CassandraContainer<>("cassandra")
-      .withInitScript("queries.cql")
-      .withStartupTimeout(Duration.ofMinutes(2))
-      .withEnv("CASSANDRA_SNITCH", "GossipingPropertyFileSnitch") // Tune Cassandra options for faster startup
-      .withEnv("JVM_OPTS", "-Dcassandra.skip_wait_for_gossip_to_settle=0 -Dcassandra.initial_token=0");
-    cassandra.start();
+  private static void startElasticsearch() {
+    DockerImageName imageName = DockerImageName.parse("elasticsearch:7.14.2")
+      .asCompatibleSubstituteFor("docker.elastic.co/elasticsearch/elasticsearch");
+    TestElasticsearchSuite.elasticsearch = new ElasticsearchContainer(imageName)
+      .withStartupTimeout(Duration.ofMinutes(2));
+    TestElasticsearchSuite.elasticsearch.start();
+  }
+
+  public static String getAddress() {
+    return elasticsearch.getHttpHostAddress();
   }
 }
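
The suite keeps a single Elasticsearch container alive for every member test via reference counting: initElasticsearch starts the container for the first caller, and tearDownCluster stops it only when the last caller releases it, so the member classes work both inside the suite and when run individually. Together with the surefire includes/excludes added to the pom above, only TestElasticsearchSuite is launched directly, so one container serves all four test classes. The idiom in isolation (names illustrative; a plain Object stands in for the container):

    import java.util.concurrent.atomic.AtomicInteger;

    public class SharedContainer {
      private static final AtomicInteger refCount = new AtomicInteger(0);
      private static Object container;

      public static synchronized void acquire() {
        if (refCount.getAndIncrement() == 0) {
          container = new Object();   // start the real container here
        }
      }

      public static synchronized void release() {
        if (refCount.decrementAndGet() == 0 && container != null) {
          container = null;           // stop the real container here
        }
      }

      public static void main(String[] args) {
        acquire();  // first acquire starts the container
        acquire();  // a second user reuses it
        release();  // still held by the first user
        release();  // last release stops it
      }
    }
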
diff --git a/contrib/storage-hbase/pom.xml b/contrib/storage-hbase/pom.xml
index 22ee571bdb..ebad66b75d 100644
--- a/contrib/storage-hbase/pom.xml
+++ b/contrib/storage-hbase/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-storage-hbase</artifactId>
diff --git a/contrib/storage-hive/core/pom.xml b/contrib/storage-hive/core/pom.xml
index 949f99be11..795f130cae 100644
--- a/contrib/storage-hive/core/pom.xml
+++ b/contrib/storage-hive/core/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <groupId>org.apache.drill.contrib.storage-hive</groupId>
     <artifactId>drill-contrib-storage-hive-parent</artifactId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-storage-hive-core</artifactId>
diff --git a/contrib/storage-hive/hive-exec-shade/pom.xml b/contrib/storage-hive/hive-exec-shade/pom.xml
index 2a6909ba23..3486b449c9 100644
--- a/contrib/storage-hive/hive-exec-shade/pom.xml
+++ b/contrib/storage-hive/hive-exec-shade/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <groupId>org.apache.drill.contrib.storage-hive</groupId>
     <artifactId>drill-contrib-storage-hive-parent</artifactId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-hive-exec-shaded</artifactId>
diff --git a/contrib/storage-hive/pom.xml b/contrib/storage-hive/pom.xml
index 9d65d859a4..ffb1012512 100644
--- a/contrib/storage-hive/pom.xml
+++ b/contrib/storage-hive/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <groupId>org.apache.drill.contrib</groupId>
     <artifactId>drill-contrib-parent</artifactId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <groupId>org.apache.drill.contrib.storage-hive</groupId>
diff --git a/contrib/storage-http/pom.xml b/contrib/storage-http/pom.xml
index 8a643bc079..6fff90fe04 100644
--- a/contrib/storage-http/pom.xml
+++ b/contrib/storage-http/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-storage-http</artifactId>
diff --git a/contrib/storage-jdbc/pom.xml b/contrib/storage-jdbc/pom.xml
index 2b07e821f1..e69ada8b10 100644
--- a/contrib/storage-jdbc/pom.xml
+++ b/contrib/storage-jdbc/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-jdbc-storage</artifactId>
diff --git a/contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcCatalogSchema.java b/contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcCatalogSchema.java
index 52721f6936..dce76aaebe 100644
--- a/contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcCatalogSchema.java
+++ b/contrib/storage-jdbc/src/main/java/org/apache/drill/exec/store/jdbc/JdbcCatalogSchema.java
@@ -55,6 +55,11 @@ class JdbcCatalogSchema extends AbstractSchema {
       connectionSchemaName = con.getSchema();
       while (set.next()) {
         final String catalogName = set.getString(1);
+        if (catalogName == null) {
+          // DB2 is an example of why this escape is needed.
+          continue;
+        }
+
         CapitalizingJdbcSchema schema = new CapitalizingJdbcSchema(
             getSchemaPath(), catalogName, source, dialect, convention, catalogName, null, caseSensitive);
         schemaMap.put(schema.getName(), schema);
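
The guard added above exists because DatabaseMetaData.getCatalogs() may return a row whose TABLE_CAT column is null on some drivers, DB2 being the example cited. A self-contained illustration of the same defensive iteration (the connection URL is a placeholder):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustration only: list catalog names, skipping the null entry that
// drivers such as DB2 can report.
public class CatalogListing {
  public static void printCatalogs(String url) throws SQLException {
    try (Connection con = DriverManager.getConnection(url);
         ResultSet catalogs = con.getMetaData().getCatalogs()) {
      while (catalogs.next()) {
        String name = catalogs.getString(1); // column 1 is TABLE_CAT
        if (name == null) {
          continue; // unnamed catalog; skip it instead of failing later
        }
        System.out.println(name);
      }
    }
  }
}
```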
diff --git a/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithPostgres.java b/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithPostgres.java
index b31cff7782..dd11ce5beb 100644
--- a/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithPostgres.java
+++ b/contrib/storage-jdbc/src/test/java/org/apache/drill/exec/store/jdbc/TestJdbcPluginWithPostgres.java
@@ -313,4 +313,16 @@ public class TestJdbcPluginWithPostgres extends ClusterTest {
       .exclude("Limit\\(")
       .match();
   }
+
+  @Test // DRILL-8013
+  public void testAvgFunction() throws Exception {
+    String query = "select avg(person_id) `avg` from pg.`public`.person";
+
+    testBuilder()
+      .sqlQuery(query)
+      .unOrdered()
+      .baselineColumns("avg")
+      .baselineValues(2.75)
+      .go();
+  }
 }
diff --git a/contrib/storage-kafka/pom.xml b/contrib/storage-kafka/pom.xml
index cf87505cd4..d1eec61366 100644
--- a/contrib/storage-kafka/pom.xml
+++ b/contrib/storage-kafka/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-storage-kafka</artifactId>
diff --git a/contrib/storage-kudu/pom.xml b/contrib/storage-kudu/pom.xml
index 256afcd605..72b550f59d 100644
--- a/contrib/storage-kudu/pom.xml
+++ b/contrib/storage-kudu/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-kudu-storage</artifactId>
diff --git a/contrib/storage-mongo/pom.xml b/contrib/storage-mongo/pom.xml
index e9e539e4e1..d834171a29 100644
--- a/contrib/storage-mongo/pom.xml
+++ b/contrib/storage-mongo/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-mongo-storage</artifactId>
diff --git a/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/plan/MongoPluginImplementor.java b/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/plan/MongoPluginImplementor.java
index 64c9b4e24b..b55fa6b173 100644
--- a/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/plan/MongoPluginImplementor.java
+++ b/contrib/storage-mongo/src/main/java/org/apache/drill/exec/store/mongo/plan/MongoPluginImplementor.java
@@ -40,10 +40,12 @@ import org.apache.drill.exec.planner.common.DrillLimitRelBase;
 import org.apache.drill.exec.planner.logical.DrillOptiq;
 import org.apache.drill.exec.planner.logical.DrillParseContext;
 import org.apache.drill.exec.planner.physical.PrelUtil;
+import org.apache.drill.exec.store.StoragePlugin;
 import org.apache.drill.exec.store.mongo.MongoAggregateUtils;
 import org.apache.drill.exec.store.mongo.MongoFilterBuilder;
 import org.apache.drill.exec.store.mongo.MongoGroupScan;
 import org.apache.drill.exec.store.mongo.MongoScanSpec;
+import org.apache.drill.exec.store.mongo.MongoStoragePlugin;
 import org.apache.drill.exec.store.plan.AbstractPluginImplementor;
 import org.apache.drill.exec.store.plan.PluginImplementor;
 import org.apache.drill.exec.store.plan.rel.PluginAggregateRel;
@@ -282,6 +284,11 @@ public class MongoPluginImplementor extends AbstractPluginImplementor {
     return hasPluginGroupScan(scan);
   }
 
+  @Override
+  protected Class<? extends StoragePlugin> supportedPlugin() {
+    return MongoStoragePlugin.class;
+  }
+
   @Override
   public GroupScan getPhysicalOperator() {
     MongoScanSpec scanSpec = groupScan.getScanSpec();
diff --git a/contrib/storage-opentsdb/pom.xml b/contrib/storage-opentsdb/pom.xml
index d004658290..0e06a76189 100644
--- a/contrib/storage-opentsdb/pom.xml
+++ b/contrib/storage-opentsdb/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>drill-contrib-parent</artifactId>
         <groupId>org.apache.drill.contrib</groupId>
-        <version>1.20.0</version>
+        <version>1.20.1-SNAPSHOT</version>
     </parent>
 
     <artifactId>drill-opentsdb-storage</artifactId>
diff --git a/contrib/storage-phoenix/README.md b/contrib/storage-phoenix/README.md
index 01278b5a20..fa8467fe04 100644
--- a/contrib/storage-phoenix/README.md
+++ b/contrib/storage-phoenix/README.md
@@ -103,7 +103,7 @@ requires a recompilation of HBase because of incompatible changes between Hadoop
 
  1. Download HBase 2.4.2 sources and rebuild with Hadoop 3.
 
-    ```mvn clean install -DskipTests -Dhadoop.profile=3.0 -Dhadoop-three.version=3.2.2```
+    ```mvn clean install -DskipTests -Dhadoop.profile=3.0 -Dhadoop-three.version=3.2.3```
 
  2. Remove the `Ignore` annotation in `PhoenixTestSuite.java`.
     
diff --git a/contrib/storage-phoenix/pom.xml b/contrib/storage-phoenix/pom.xml
index 524e1d9c74..2eed9d1738 100644
--- a/contrib/storage-phoenix/pom.xml
+++ b/contrib/storage-phoenix/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <groupId>org.apache.drill.contrib</groupId>
     <artifactId>drill-contrib-parent</artifactId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
   <artifactId>drill-storage-phoenix</artifactId>
   <name>Drill : Contrib : Storage : Phoenix</name>
diff --git a/contrib/storage-splunk/pom.xml b/contrib/storage-splunk/pom.xml
index 6d72100674..0132dc6c3a 100644
--- a/contrib/storage-splunk/pom.xml
+++ b/contrib/storage-splunk/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-storage-splunk</artifactId>
diff --git a/contrib/udfs/pom.xml b/contrib/udfs/pom.xml
index 2900a28312..4aeb62fd12 100644
--- a/contrib/udfs/pom.xml
+++ b/contrib/udfs/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-contrib-parent</artifactId>
     <groupId>org.apache.drill.contrib</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-udfs</artifactId>
diff --git a/distribution/pom.xml b/distribution/pom.xml
index e4f943d477..c996212874 100644
--- a/distribution/pom.xml
+++ b/distribution/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-root</artifactId>
     <groupId>org.apache.drill</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>distribution</artifactId>
@@ -31,7 +31,7 @@
   <name>Drill : Packaging and Distribution Assembly</name>
 
   <properties>
-    <aws.java.sdk.version>1.11.375</aws.java.sdk.version>
+    <aws.java.sdk.version>1.12.211</aws.java.sdk.version>
     <oci.hdfs.version>3.3.0.7.0.1</oci.hdfs.version>
   </properties>
 
diff --git a/distribution/src/main/resources/winutils/hadoop.dll b/distribution/src/main/resources/winutils/hadoop.dll
index 441d3edd7d..763c40acc4 100644
Binary files a/distribution/src/main/resources/winutils/hadoop.dll and b/distribution/src/main/resources/winutils/hadoop.dll differ
diff --git a/distribution/src/main/resources/winutils/winutils.exe b/distribution/src/main/resources/winutils/winutils.exe
index 75be699559..b2c4819bf7 100644
Binary files a/distribution/src/main/resources/winutils/winutils.exe and b/distribution/src/main/resources/winutils/winutils.exe differ
diff --git a/docs/dev/HadoopWinutils.md b/docs/dev/HadoopWinutils.md
index a9ead275ad..aab93fa06e 100644
--- a/docs/dev/HadoopWinutils.md
+++ b/docs/dev/HadoopWinutils.md
@@ -3,7 +3,7 @@
 Hadoop Winutils native libraries are required to run Drill on Windows. The last version present in maven repository is 2.7.1 and is not updated anymore.
 That's why Winutils version matching Hadoop version used in Drill is located in distribution/src/main/resources.
 
-Current Winutils version: *3.2.2.*
+Current Winutils version: *3.2.3.*
 
 ## References
 - Official wiki: [Windows Problems](https://cwiki.apache.org/confluence/display/HADOOP2/WindowsProblems).
diff --git a/docs/dev/Release.md b/docs/dev/Release.md
index c08ed39fb3..4c6da335ab 100644
--- a/docs/dev/Release.md
+++ b/docs/dev/Release.md
@@ -35,9 +35,9 @@
     1. ### SVN
         1. Install subversion client (see instructions on http://subversion.apache.org/packages.html#osx for
          installing svn on different systems).
-        2. Check that svn works:
+        2. Check that svn works and obtain a working copy of dist/dev/drill:
         ```
-        svn co https://dist.apache.org/repos/dist/release/drill ~/src/release/drill-dist
+        svn co https://dist.apache.org/repos/dist/dev/drill ~/src/release/drill-dist-dev
         ```
         You also need writable access to Apache SVN. (You need to be a PMC member for this).
     2. ### GPG key:
@@ -58,7 +58,7 @@
             gpg --allow-secret-key-import --import mygpgkey_sec.gpg
             ```
         5. Have another committer signed your key (add to the trust chain).
-            Apache advises to do it at 
+            Apache advises doing this at
             [key signing parties](https://www.apache.org/dev/release-signing.html#key-signing-party).
         6. Make sure the default key is the key generated using the Apache email.
         7. Publish your public key to a public server (e.g. http://pgp.surfnet.nl or http://pgp.mit.edu)
@@ -77,7 +77,7 @@
         3. Note that you can add more than one SSH key corresponding to multiple machines.
         4. Enter your Apache password and submit the changes.
         5. Verify that you can do an sftp to the Apache server by running the following: `sftp <username>@home.apache.org`.
-    4. ### Setup Maven
+    4. ### Set up Maven
         1. Apache's Maven repository access is documented here:
             http://www.apache.org/dev/publishing-maven-artifacts.html
             http://www.apache.org/dev/publishing-maven-artifacts.html#dev-env.
@@ -130,16 +130,16 @@
         ```
     9. Do the release preparation:
         ```
-        mvn -X release:prepare -Papache-release -DpushChanges=false -DskipTests -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE}  -DskipTests=true -Dmaven.javadoc.skip=false" -DreleaseVersion=1.17.0 -DdevelopmentVersion=1.18.0-SNAPSHOT -Dtag=drill-1.17.0
+        mvn -X release:prepare -Papache-release -DpushChanges=false -DskipTests -DreleaseVersion=1.17.0 -DdevelopmentVersion=1.18.0-SNAPSHOT -Dtag=drill-1.17.0 -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE}  -DskipTests=true -Dmaven.javadoc.skip=false [-Pfoo_profile]"
         ```
     10. Make sure to change Drill version to the proper one.
     11. Check that `target` folder contains the following files (with the correct version number):
         ```
-        apache-drill-1.17.0-src.tar.gz 
+        apache-drill-1.17.0-src.tar.gz
         apache-drill-1.17.0-src.tar.gz.asc
         apache-drill-1.17.0-src.tar.gz.sha512
         apache-drill-1.17.0-src.zip
-        apache-drill-1.17.0-src.zip.asc 
+        apache-drill-1.17.0-src.zip.asc
         apache-drill-1.17.0-src.zip.sha512
         ```
     12. Verify signature, ensure that GPG key for Apache was used (see details at
@@ -167,7 +167,7 @@
         ```
         If you want to additionally check resulting archives and jars, add `-Dmaven.deploy.skip=true` flag to avoid deploying jars to the Nexus repository:
         ```
-        mvn release:perform -DconnectionUrl=scm:git:git@github.com:vvysotskyi/drill.git -DskipTests -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests=true -DconnectionUrl=scm:git:git@github.com:vvysotskyi/drill.git -Dmaven.deploy.skip=true"
+        mvn release:perform -DconnectionUrl=scm:git:git@github.com:vvysotskyi/drill.git -DskipTests -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests=true -DconnectionUrl=scm:git:git@github.com:vvysotskyi/drill.git -Dmaven.deploy.skip=true [-Pfoo_profile]"
         ```
         After checks are performed, run this command without the flag.
     15. Deploy the release commit:
@@ -175,21 +175,23 @@
         git checkout drill-1.17.0
         mvn deploy -Papache-release -DskipTests -Dgpg.passphrase=${GPG_PASSPHRASE}
         ```
-    16. Copy release files to a local release staging directory:
+    16. Copy release files to your svn working copy of dist.apache.org/repos/dist/dev/drill:
         ```
-        cp ~/src/release/drill/target/target/checkout/apache-drill-1.17.0-src.tar.gz* ~/release/1.17.0-rc0/ && \ 
-        cp ~/src/release/drill/target/target/checkout/apache-drill-1.17.0.zip* ~/release/1.17.0-rc0/ \ 
-        cp ~/src/release/drill/target/checkout/distribution/target/apache-drill-1.17.0.tar.gz* ~/release/1.17.0-rc0/ \ 
+        cp ~/src/release/drill/target/target/checkout/apache-drill-1.17.0-src.tar.gz* ~/src/release/drill-dist-dev/1.17.0-rc0/ && \
+        cp ~/src/release/drill/target/target/checkout/apache-drill-1.17.0.zip* ~/src/release/drill-dist-dev/1.17.0-rc0/ \
+        cp ~/src/release/drill/target/checkout/distribution/target/apache-drill-1.17.0.tar.gz* ~/src/release/drill-dist-dev/1.17.0-rc0/ \
         ```
     17. Check if the artifacts are signed properly:
         ```
-        ./tools/release-scripts/checksum.sh ~/release/1.17.0-rc0/apache-drill-1.17.0-src.tar.gz
-        ./tools/release-scripts/checksum.sh ~/release/1.17.0-rc0/apache-drill-1.17.0-src.zip
-        ./tools/release-scripts/checksum.sh ~/release/1.17.0-rc0/apache-drill-1.17.0.tar.gz
+        ./tools/release-scripts/checksum.sh ~/src/release/drill-dist-dev/1.17.0-rc0/apache-drill-1.17.0-src.tar.gz
+        ./tools/release-scripts/checksum.sh ~/src/release/drill-dist-dev/1.17.0-rc0/apache-drill-1.17.0-src.zip
+        ./tools/release-scripts/checksum.sh ~/src/release/drill-dist-dev/1.17.0-rc0/apache-drill-1.17.0.tar.gz
         ```
-    18. Copy release files to a directory on `home.apache.org` for voting:
+    18. Commit the release files and browse to https://dist.apache.org/repos/dist/dev/drill/ to check that they are accessible. This can only be done by a PMC member:
         ```
-        scp ~/release/1.17.0-rc0/* <username>@home.apache.org:~/public_html/drill/releases/1.17.0/rc0
+        cd ~/src/release/drill-dist-dev
+        svn add 1.17.0-rc0
+        svn commit
         ```
 
 4. ## Automated release process
@@ -198,9 +200,18 @@
     tools/release-scripts/release.sh
     ```
     The release script will push the maven artifacts to the Maven staging repo.
+
+5. ## Multiple builds
+    Currently, releasing multiple builds is done by performing consecutive releases.  E.g. to add
+    a Hadoop 2 build of Drill, loop back to the top of the instructions now and start again with
+    a build profile of 'hadoop-2' and a release version of '1.17.0-hadoop2'.  Note that it is not
+    necessary to close the jar release in the Maven repo first since the artifacts from your next
+    release will cause no identifier collisions due to their different version suffix.  This means
+    that a single Maven repo can hold the artifacts for both 1.17.0 and 1.17.0-hadoop2 (see the sketch below).
+
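
For illustration, the second pass through the release steps might look like the following. This is a sketch only, patterned on the release:prepare command earlier in this document, with the profile id and version suffix substituted:

```
mvn -X release:prepare -Papache-release -DpushChanges=false -DskipTests \
  -DreleaseVersion=1.17.0-hadoop2 -DdevelopmentVersion=1.18.0-SNAPSHOT -Dtag=drill-1.17.0-hadoop2 \
  -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests=true -Dmaven.javadoc.skip=false -Phadoop-2"
```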
 5. ## Publish release candidate and vote
     1. Go to the [Apache Maven staging repo](https://repository.apache.org/) and close the new jar release.
-        This step is done in the Maven GUI. For detailed instructions on sonatype GUI please refer to 
+        This step is done in the Maven GUI. For detailed instructions on the Sonatype GUI please refer to
         https://central.sonatype.org/pages/releasing-the-deployment.html#locate-and-examine-your-staging-repository.
     2. Start vote (vote should last at least 72 hours).
 
@@ -236,7 +247,7 @@
 
 
         [1] https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12341087
-        [2] http://home.apache.org/~arina/drill/releases/1.12.0/rc0/
+        [2] http://home.apache.org/~arina/drill/releases/1.12.0-rc0/
         [3] https://repository.apache.org/content/repositories/orgapachedrill-1043/
         [4] https://github.com/arina-ielchiieva/drill/commits/drill-1.12.0
         ```
@@ -280,7 +291,7 @@
 
             3x +1 (binding): Arina, Aman, Parth
 
-            5x +1 (non-binding): Vitalii, Holger, Prasad, Vova, Charles 
+            5x +1 (non-binding): Vitalii, Holger, Prasad, Vova, Charles
 
             No 0s or -1s.
 
@@ -288,17 +299,17 @@
 
             Kind regards
             ```
-        2. Add the release to the [dist.apache.org](https://dist.apache.org/repos/dist/release/drill/) and delete the old version, keeping two most recent.
+        2. Move the release files to the Drill release directory with an entirely remote `svn move` operation and delete the old version, keeping the two most recent.
            This can only be done by a PMC member:
             ```
-            svn co https://dist.apache.org/repos/dist/release/drill ~/src/release/drill-dist
-            cd ~/src/release/drill-dist
-            mkdir drill-1.17.0
-            cp -r ~/release/1.17.0-rc0 drill-1.17.0
-            svn add drill-1.17.0
-            svn commit --message "Upload Apache Drill 1.17.0 release."
-            svn delete 1.15.0
-            svn commit --message "Deleting drill-1.15.0 to keep only last two versions"
+            svn move \
+              -m "Upload Apache Drill 1.17.0 release." \
+              https://dist.apache.org/repos/dist/dev/drill/drill-1.17.0-rc0 \
+              https://dist.apache.org/repos/dist/release/drill/drill-1.17.0
+
+            svn delete \
+              -m "Deleting drill-1.15.0 to keep only last two versions" \
+              https://dist.apache.org/repos/dist/release/drill/drill-1.15.0
             ```
         3. Go to the [Apache Maven staging repo](https://repository.apache.org/) and promote the release to the production.
         4. Create branch and tag for this release and update Drill version in master (if used automated scripts, tag will be like this `drill-1.11.0`. Branch should be named as `1.11.0`).
@@ -309,7 +320,7 @@
         6. Post release:
             1. "What's New" for the new release.
             2. Update Apache JIRA and add release date for this release. Add a new release tag if not already there.
-            3. Update Drill Web site:
+            3. Update Drill Web site through its source repo, https://github.com/apache/drill-site.
                 1. Generate release notes for Drill: https://confluence.atlassian.com/jira/creating-release-notes-185729647.html
                     and create a MarkDown file for the release notes - post to the site the day of the release.
                 2. Write the blog post and push it out to the Apache Drill website so it can be referenced in the announcement.
@@ -346,7 +357,7 @@
                         ```
                     6. Commit changes with the commit message `Publish JavaDocs for the Apache Drill 1.17.0`
                 5. Instructions how to build and deploy Web site may be found here:
-                    https://github.com/apache/drill/blob/gh-pages/README.md
+                    https://github.com/apache/drill-site
             3. Post the announcement about new release on [Apache Drill Twitter](https://twitter.com/apachedrill]).
             4. A PMC member needs to update the release date for new release here:
                 https://reporter.apache.org/addrelease.html?drill
diff --git a/drill-yarn/pom.xml b/drill-yarn/pom.xml
index aa1cd8f408..3d1d59c27e 100644
--- a/drill-yarn/pom.xml
+++ b/drill-yarn/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-root</artifactId>
     <groupId>org.apache.drill</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-yarn</artifactId>
diff --git a/exec/java-exec/pom.xml b/exec/java-exec/pom.xml
index 15e84c2bb8..a5f989f892 100644
--- a/exec/java-exec/pom.xml
+++ b/exec/java-exec/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>exec-parent</artifactId>
     <groupId>org.apache.drill.exec</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
   <artifactId>drill-java-exec</artifactId>
   <name>Drill : Exec : Java Execution Engine</name>
@@ -219,12 +219,10 @@
     <dependency>
       <groupId>com.fasterxml.jackson.jaxrs</groupId>
       <artifactId>jackson-jaxrs-json-provider</artifactId>
-      <version>${jackson.version}</version>
     </dependency>
     <dependency>
       <groupId>com.fasterxml.jackson.module</groupId>
       <artifactId>jackson-module-afterburner</artifactId>
-      <version>${jackson.version}</version>
     </dependency>
     <dependency>
       <groupId>org.honton.chas.hocon</groupId>
@@ -652,14 +650,6 @@
       <version>${testcontainers.version}</version>
       <scope>test</scope>
     </dependency>
-    <dependency>
-      <groupId>com.github.rdblue</groupId>
-      <artifactId>brotli-codec</artifactId>
-      <version>0.1.1</version>
-      <!-- brotli-codec bundles natives for linux and darwin, amd64 only so
-      we don't ship it so as not to break startup on windows or arm -->
-      <scope>provided</scope>
-    </dependency>
   </dependencies>
 
   <profiles>
@@ -769,8 +759,41 @@
         </dependency>
       </dependencies>
     </profile>
+    <profile>
+      <!-- Only include a Brotli codec in test scope on AMD64 and x86_64, see PARQUET-1975 -->
+      <id>brotli-amd64</id>
+      <activation>
+        <os>
+          <arch>amd64</arch>
+        </os>
+      </activation>
+      <dependencies>
+        <dependency>
+          <groupId>com.github.rdblue</groupId>
+          <artifactId>brotli-codec</artifactId>
+          <version>0.1.1</version>
+          <!-- this codec is not shipped because it breaks startup on unsupported platforms -->
+          <scope>test</scope>
+        </dependency>
+      </dependencies>
+    </profile>
+    <profile>
+      <id>brotli-x86_64</id>
+      <activation>
+        <os>
+          <arch>x86_64</arch>
+        </os>
+      </activation>
+      <dependencies>
+        <dependency>
+          <groupId>com.github.rdblue</groupId>
+          <artifactId>brotli-codec</artifactId>
+          <version>0.1.1</version>
+          <scope>test</scope>
+        </dependency>
+      </dependencies>
+    </profile>
   </profiles>
-
   <build>
     <plugins>
       <plugin>
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java b/exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java
index 312a9c4a60..1ca51bc469 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/ExecConstants.java
@@ -274,7 +274,7 @@ public final class ExecConstants {
   public static final String HTTP_WEB_CLIENT_RESULTSET_AUTOLIMIT_CHECKED = "drill.exec.http.web.client.resultset.autolimit.checked";
   public static final String HTTP_WEB_CLIENT_RESULTSET_AUTOLIMIT_ROWS = "drill.exec.http.web.client.resultset.autolimit.rows";
   public static final String HTTP_WEB_CLIENT_RESULTSET_ROWS_PER_PAGE_VALUES = "drill.exec.http.web.client.resultset.rowsPerPageValues";
-  //Control Heap usage runaway
+  @Deprecated // TODO: Remove any logic based on this option now that REST query results stream.
   public static final String HTTP_MEMORY_HEAP_FAILURE_THRESHOLD = "drill.exec.http.memory.heap.failure.threshold";
   //Customize filters in options
   public static final String HTTP_WEB_OPTIONS_FILTERS = "drill.exec.http.web.options.filters";
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/StringFunctions.java b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/StringFunctions.java
index 27b06448db..305cfcdac3 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/StringFunctions.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/expr/fn/impl/StringFunctions.java
@@ -894,10 +894,15 @@ public class StringFunctions{
 
     @Override
     public void eval() {
-      out.buffer = buffer;
       out.start = out.end = 0;
       int fromL = from.end - from.start;
       int textL = text.end - text.start;
+      if (buffer.capacity() < textL) {
+        // Reallocate the buffer if the text length exceeds the current capacity.
+        out.buffer = buffer.reallocIfNeeded(textL);
+      } else {
+        out.buffer = buffer;
+      }
 
       if (fromL > 0 && fromL <= textL) {
         //If "from" is not empty and it's length is no longer than text's length
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/DrillReduceAggregatesRule.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/DrillReduceAggregatesRule.java
index 009766546b..062fda0c34 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/DrillReduceAggregatesRule.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/logical/DrillReduceAggregatesRule.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.exec.planner.logical;
 
+import org.apache.drill.exec.planner.sql.DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper;
 import org.apache.drill.shaded.guava.com.google.common.collect.ImmutableList;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
 import org.apache.drill.shaded.guava.com.google.common.collect.Maps;
@@ -340,7 +341,8 @@ public class DrillReduceAggregatesRule extends RelOptRule {
             sumType,
             sumType.isNullable() || nGroups == 0);
     SqlAggFunction sumAgg =
-        new DrillCalciteSqlAggFunctionWrapper(new SqlSumEmptyIsZeroAggFunction(), sumType);
+        new DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper(
+          new SqlSumEmptyIsZeroAggFunction(), sumType);
     AggregateCall sumCall = AggregateCall.create(sumAgg, oldCall.isDistinct(),
         oldCall.isApproximate(), oldCall.getArgList(), -1, sumType, null);
     final SqlCountAggFunction countAgg = (SqlCountAggFunction) SqlStdOperatorTable.COUNT;
@@ -437,7 +439,7 @@ public class DrillReduceAggregatesRule extends RelOptRule {
           typeFactory.createTypeWithNullability(
               oldCall.getType(), argType.isNullable());
     }
-    sumZeroAgg = new DrillCalciteSqlAggFunctionWrapper(
+    sumZeroAgg = new DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper(
         new SqlSumEmptyIsZeroAggFunction(), sumType);
     AggregateCall sumZeroCall = AggregateCall.create(sumZeroAgg, oldCall.isDistinct(),
         oldCall.isApproximate(), oldCall.getArgList(), -1, sumType, null);
@@ -713,7 +715,7 @@ public class DrillReduceAggregatesRule extends RelOptRule {
           final RelDataType argType = oldAggregateCall.getType();
           final RelDataType sumType = oldAggRel.getCluster().getTypeFactory()
               .createTypeWithNullability(argType, argType.isNullable());
-          final SqlAggFunction sumZeroAgg = new DrillCalciteSqlAggFunctionWrapper(
+          final SqlAggFunction sumZeroAgg = new DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper(
               new SqlSumEmptyIsZeroAggFunction(), sumType);
           AggregateCall sumZeroCall =
               AggregateCall.create(
@@ -775,7 +777,7 @@ public class DrillReduceAggregatesRule extends RelOptRule {
             final RelDataType argType = rexWinAggCall.getType();
             final RelDataType sumType = oldWinRel.getCluster().getTypeFactory()
                 .createTypeWithNullability(argType, argType.isNullable());
-            final SqlAggFunction sumZeroAgg = new DrillCalciteSqlAggFunctionWrapper(
+            final SqlAggFunction sumZeroAgg = new DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper(
                 new SqlSumEmptyIsZeroAggFunction(), sumType);
             final Window.RexWinAggCall sumZeroCall =
                 new Window.RexWinAggCall(
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper.java
new file mode 100644
index 0000000000..7f221a0368
--- /dev/null
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper.java
@@ -0,0 +1,173 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.drill.exec.planner.sql;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.sql.SqlCall;
+import org.apache.calcite.sql.SqlCallBinding;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.calcite.sql.SqlOperatorBinding;
+import org.apache.calcite.sql.SqlSyntax;
+import org.apache.calcite.sql.fun.SqlSumEmptyIsZeroAggFunction;
+import org.apache.calcite.sql.type.SqlReturnTypeInference;
+import org.apache.calcite.sql.validate.SqlMonotonicity;
+import org.apache.calcite.sql.validate.SqlValidator;
+import org.apache.calcite.sql.validate.SqlValidatorScope;
+import org.apache.calcite.util.Litmus;
+import org.apache.calcite.util.Util;
+import org.apache.drill.exec.expr.fn.DrillFuncHolder;
+
+import java.util.List;
+
+/**
+ * This class serves as a wrapper class for {@link SqlSumEmptyIsZeroAggFunction}
+ * with the same goal as {@link DrillCalciteSqlAggFunctionWrapper}
+ * but extends {@link SqlSumEmptyIsZeroAggFunction} to allow using
+ * additional Calcite functionality designated for {@link SqlSumEmptyIsZeroAggFunction}.
+ */
+public class DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper
+  extends SqlSumEmptyIsZeroAggFunction implements DrillCalciteSqlWrapper {
+
+  private final SqlAggFunction operator;
+
+  private final SqlReturnTypeInference sqlReturnTypeInference;
+
+  private DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper(
+    SqlSumEmptyIsZeroAggFunction sqlAggFunction,
+    SqlReturnTypeInference sqlReturnTypeInference) {
+    this.sqlReturnTypeInference = sqlReturnTypeInference;
+    this.operator = sqlAggFunction;
+  }
+
+  public DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper(
+    SqlSumEmptyIsZeroAggFunction sqlAggFunction,
+    List<DrillFuncHolder> functions) {
+    this(sqlAggFunction,
+      TypeInferenceUtils.getDrillSqlReturnTypeInference(
+        sqlAggFunction.getName(),
+        functions));
+  }
+
+  public DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper(
+    SqlSumEmptyIsZeroAggFunction sqlAggFunction,
+    RelDataType relDataType) {
+    this(sqlAggFunction, opBinding -> relDataType);
+  }
+
+  @Override
+  public SqlOperator getOperator() {
+    return operator;
+  }
+
+  @Override
+  public SqlReturnTypeInference getReturnTypeInference() {
+    return this.sqlReturnTypeInference;
+  }
+
+  @Override
+  public RelDataType inferReturnType(SqlOperatorBinding opBinding) {
+    if (this.sqlReturnTypeInference != null) {
+      RelDataType returnType = this.sqlReturnTypeInference.inferReturnType(opBinding);
+      if (returnType == null) {
+        throw new IllegalArgumentException(String.format(
+          "Cannot infer return type for %s; operand types: %s",
+          opBinding.getOperator(), opBinding.collectOperandTypes()));
+      } else {
+        return returnType;
+      }
+    } else {
+      throw Util.needToImplement(this);
+    }
+  }
+
+  @Override
+  public boolean validRexOperands(int count, Litmus litmus) {
+    return true;
+  }
+
+  @Override
+  public String getAllowedSignatures(String opNameToUse) {
+    return operator.getAllowedSignatures(opNameToUse);
+  }
+
+  @Override
+  public boolean isAggregator() {
+    return operator.isAggregator();
+  }
+
+  @Override
+  public boolean allowsFraming() {
+    return operator.allowsFraming();
+  }
+
+  @Override
+  public SqlMonotonicity getMonotonicity(SqlOperatorBinding call) {
+    return operator.getMonotonicity(call);
+  }
+
+  @Override
+  public boolean isDeterministic() {
+    return operator.isDeterministic();
+  }
+
+  @Override
+  public boolean isDynamicFunction() {
+    return operator.isDynamicFunction();
+  }
+
+  @Override
+  public boolean requiresDecimalExpansion() {
+    return operator.requiresDecimalExpansion();
+  }
+
+  @Override
+  public boolean argumentMustBeScalar(int ordinal) {
+    return operator.argumentMustBeScalar(ordinal);
+  }
+
+  @Override
+  public boolean checkOperandTypes(
+    SqlCallBinding callBinding,
+    boolean throwOnFailure) {
+    return true;
+  }
+
+  @Override
+  public SqlSyntax getSyntax() {
+    return operator.getSyntax();
+  }
+
+  @Override
+  public List<String> getParamNames() {
+    return operator.getParamNames();
+  }
+
+  @Override
+  public String getSignatureTemplate(final int operandsCount) {
+    return operator.getSignatureTemplate(operandsCount);
+  }
+
+  @Override
+  public RelDataType deriveType(
+    SqlValidator validator,
+    SqlValidatorScope scope,
+    SqlCall call) {
+    return operator.deriveType(validator, scope, call);
+  }
+}
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillOperatorTable.java b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillOperatorTable.java
index d4022571bb..0b112eba4e 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillOperatorTable.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/planner/sql/DrillOperatorTable.java
@@ -17,6 +17,7 @@
  */
 package org.apache.drill.exec.planner.sql;
 
+import org.apache.calcite.sql.fun.SqlSumEmptyIsZeroAggFunction;
 import org.apache.calcite.sql.validate.SqlNameMatcher;
 import org.apache.drill.shaded.guava.com.google.common.collect.ArrayListMultimap;
 import org.apache.drill.shaded.guava.com.google.common.collect.Lists;
@@ -169,7 +170,11 @@ public class DrillOperatorTable extends SqlStdOperatorTable {
   private void populateWrappedCalciteOperators() {
     for (SqlOperator calciteOperator : inner.getOperatorList()) {
       final SqlOperator wrapper;
-      if (calciteOperator instanceof SqlAggFunction) {
+      if (calciteOperator instanceof SqlSumEmptyIsZeroAggFunction) {
+        wrapper = new DrillCalciteSqlSumEmptyIsZeroAggFunctionWrapper(
+          (SqlSumEmptyIsZeroAggFunction) calciteOperator,
+          getFunctionListWithInference(calciteOperator.getName()));
+      } else if (calciteOperator instanceof SqlAggFunction) {
         wrapper = new DrillCalciteSqlAggFunctionWrapper((SqlAggFunction) calciteOperator,
             getFunctionListWithInference(calciteOperator.getName()));
       } else if (calciteOperator instanceof SqlFunction) {
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/record/ColumnConverter.java b/exec/java-exec/src/main/java/org/apache/drill/exec/record/ColumnConverter.java
index d81db766a7..e8cad3b535 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/record/ColumnConverter.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/record/ColumnConverter.java
@@ -219,7 +219,7 @@ public interface ColumnConverter {
       }
     }
 
-    private MinorType getScalarMinorType(Class<?> clazz) {
+    protected MinorType getScalarMinorType(Class<?> clazz) {
       if (clazz == byte.class || clazz == Byte.class) {
         return MinorType.TINYINT;
       } else if (clazz == short.class || clazz == Short.class) {
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/BaseQueryRunner.java b/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/BaseQueryRunner.java
index 842cb8e2f9..17caa14907 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/BaseQueryRunner.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/BaseQueryRunner.java
@@ -57,28 +57,39 @@ public abstract class BaseQueryRunner {
   }
 
   protected void applyUserName(String userName) {
-    if (!Strings.isNullOrEmpty(userName)) {
-      DrillConfig config = workManager.getContext().getConfig();
-      if (!config.getBoolean(ExecConstants.IMPERSONATION_ENABLED)) {
-        throw UserException.permissionError()
-          .message("User impersonation is not enabled")
-          .build(logger);
-      }
-      InboundImpersonationManager inboundImpersonationManager = new InboundImpersonationManager();
-      boolean isAdmin = !config.getBoolean(ExecConstants.USER_AUTHENTICATION_ENABLED) ||
-        ImpersonationUtil.hasAdminPrivileges(
-            webUserConnection.getSession().getCredentials().getUserName(),
-            ExecConstants.ADMIN_USERS_VALIDATOR.getAdminUsers(options),
-            ExecConstants.ADMIN_USER_GROUPS_VALIDATOR.getAdminUserGroups(options));
-      if (isAdmin) {
-        // Admin user can impersonate any user they want to (when authentication is disabled, all users are admin)
-        webUserConnection.getSession().replaceUserCredentials(
-          inboundImpersonationManager,
-          UserBitShared.UserCredentials.newBuilder().setUserName(userName).build());
-      } else {
-        // Check configured impersonation rules to see if this user is allowed to impersonate the given user
-        inboundImpersonationManager.replaceUserOnSession(userName, webUserConnection.getSession());
-      }
+    if (Strings.isNullOrEmpty(userName)) {
+      return;
+    }
+
+    DrillConfig config = workManager.getContext().getConfig();
+    if (!config.getBoolean(ExecConstants.IMPERSONATION_ENABLED)) {
+      throw UserException.permissionError()
+        .message("User impersonation is not enabled")
+        .build(logger);
+    }
+
+    String proxyUserName = webUserConnection.getSession().getCredentials().getUserName();
+    if (proxyUserName.equals(userName)) {
+      // Either the proxy user is impersonating itself, which is a no-op, or
+      // the userName on the UserSession has already been modified to be the
+      // impersonated user by an earlier request belonging to the same session.
+      return;
+    }
+
+    InboundImpersonationManager inboundImpersonationManager = new InboundImpersonationManager();
+    boolean isAdmin = !config.getBoolean(ExecConstants.USER_AUTHENTICATION_ENABLED) ||
+      ImpersonationUtil.hasAdminPrivileges(
+          proxyUserName,
+          ExecConstants.ADMIN_USERS_VALIDATOR.getAdminUsers(options),
+          ExecConstants.ADMIN_USER_GROUPS_VALIDATOR.getAdminUserGroups(options));
+    if (isAdmin) {
+      // Admin user can impersonate any user they want to (when authentication is disabled, all users are admin)
+      webUserConnection.getSession().replaceUserCredentials(
+        inboundImpersonationManager,
+        UserBitShared.UserCredentials.newBuilder().setUserName(userName).build());
+    } else {
+      // Check configured impersonation rules to see if this user is allowed to impersonate the given user
+      inboundImpersonationManager.replaceUserOnSession(userName, webUserConnection.getSession());
     }
   }
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/DrillRestServer.java b/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/DrillRestServer.java
index 8473d64039..b3de95a137 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/DrillRestServer.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/server/rest/DrillRestServer.java
@@ -310,7 +310,14 @@ public class DrillRestServer extends ResourceConfig {
      * @param config drill config
      * @param request client request
      * @return session user principal
+     *
+     * @deprecated a userName property has since been added to POST /query.json
+     * and the web UI now never sets a User-Name header. The restriction to
+     * unauthenticated Drill is also not enough for general impersonation.
+     * Only one way of requesting impersonation over HTTP should be kept: this
+     * or {@link org.apache.drill.exec.server.rest.QueryResources#submitQuery}.
      */
+    @Deprecated
     private Principal createSessionUserPrincipal(DrillConfig config, HttpServletRequest request) {
       if (WebServer.isOnlyImpersonationEnabled(config)) {
         final String userName = request.getHeader("User-Name");
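
The userName property mentioned in the deprecation note above travels with the query itself. A hedged sketch of such a request (host, port and user are placeholders; the queryType and query field names follow the Drill REST API):

```
curl -X POST http://localhost:8047/query.json \
  -H "Content-Type: application/json" \
  -d '{"queryType": "SQL", "query": "SELECT 1", "userName": "alice"}'
```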
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/ColumnConverterFactoryProvider.java
similarity index 64%
copy from exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
copy to exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/ColumnConverterFactoryProvider.java
index 83f8f72052..28aec71d0f 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/ColumnConverterFactoryProvider.java
@@ -15,22 +15,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.drill.exec.store.enumerable.plan;
+package org.apache.drill.exec.store.enumerable;
 
-import org.apache.calcite.plan.RelOptCluster;
-import org.apache.calcite.rel.RelNode;
+import com.fasterxml.jackson.annotation.JsonTypeInfo;
+import org.apache.drill.exec.record.ColumnConverterFactory;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
 
-import java.util.Map;
+@JsonTypeInfo(use = JsonTypeInfo.Id.NAME)
+public interface ColumnConverterFactoryProvider {
 
-public interface EnumerablePrelContext {
-
-  String generateCode(RelOptCluster cluster, RelNode relNode);
-
-  RelNode transformNode(RelNode input);
-
-  Map<String, Integer> getFieldsMap(RelNode transformedNode);
-
-  String getPlanPrefix();
-
-  String getTablePath(RelNode input);
+  ColumnConverterFactory getFactory(TupleMetadata schema);
 }
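
As a usage note, a storage plugin can supply its own provider through this new extension point. A hypothetical implementation (all Example* names are invented; the pattern mirrors DefaultColumnConverterFactoryProvider, added next):

```java
import com.fasterxml.jackson.annotation.JsonTypeName;
import org.apache.drill.exec.record.ColumnConverterFactory;
import org.apache.drill.exec.record.metadata.TupleMetadata;
import org.apache.drill.exec.store.enumerable.ColumnConverterFactoryProvider;

// Hypothetical provider: returns a plugin-specific ColumnConverterFactory
// subclass, e.g. one overriding getScalarMinorType (made protected by this patch).
@JsonTypeName("example")
public class ExampleColumnConverterFactoryProvider implements ColumnConverterFactoryProvider {
  public static final ColumnConverterFactoryProvider INSTANCE =
      new ExampleColumnConverterFactoryProvider();

  @Override
  public ColumnConverterFactory getFactory(TupleMetadata schema) {
    return new ExampleColumnConverterFactory(schema); // assumed subclass of ColumnConverterFactory
  }
}
```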
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/DefaultColumnConverterFactoryProvider.java
similarity index 61%
copy from exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
copy to exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/DefaultColumnConverterFactoryProvider.java
index 83f8f72052..75573abfa0 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/DefaultColumnConverterFactoryProvider.java
@@ -15,22 +15,16 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.drill.exec.store.enumerable.plan;
+package org.apache.drill.exec.store.enumerable;
 
-import org.apache.calcite.plan.RelOptCluster;
-import org.apache.calcite.rel.RelNode;
+import org.apache.drill.exec.record.ColumnConverterFactory;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
 
-import java.util.Map;
+public class DefaultColumnConverterFactoryProvider implements ColumnConverterFactoryProvider {
+  public static ColumnConverterFactoryProvider INSTANCE = new DefaultColumnConverterFactoryProvider();
 
-public interface EnumerablePrelContext {
-
-  String generateCode(RelOptCluster cluster, RelNode relNode);
-
-  RelNode transformNode(RelNode input);
-
-  Map<String, Integer> getFieldsMap(RelNode transformedNode);
-
-  String getPlanPrefix();
-
-  String getTablePath(RelNode input);
+  @Override
+  public ColumnConverterFactory getFactory(TupleMetadata schema) {
+    return new ColumnConverterFactory(schema);
+  }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableBatchCreator.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableBatchCreator.java
index 9c4ca54ed2..2dec45a61a 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableBatchCreator.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableBatchCreator.java
@@ -61,7 +61,7 @@ public class EnumerableBatchCreator implements BatchCreator<EnumerableSubScan> {
     builder.providedSchema(subScan.getSchema());
 
     ManagedReader<SchemaNegotiator> reader = new EnumerableRecordReader(subScan.getColumns(),
-        subScan.getFieldsMap(), subScan.getCode(), subScan.getSchemaPath());
+        subScan.getFieldsMap(), subScan.getCode(), subScan.getSchemaPath(), subScan.factoryProvider());
     ManagedScanFramework.ReaderFactory readerFactory = new BasicScanFactory(Collections.singletonList(reader).iterator());
     builder.setReaderFactory(readerFactory);
     builder.nullType(Types.optional(TypeProtos.MinorType.VARCHAR));
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableGroupScan.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableGroupScan.java
index 815a5c2d02..e74a97f970 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableGroupScan.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableGroupScan.java
@@ -39,6 +39,7 @@ public class EnumerableGroupScan extends AbstractGroupScan {
   private final List<SchemaPath> columns;
   private final double rows;
   private final TupleMetadata schema;
+  private final ColumnConverterFactoryProvider converterFactoryProvider;
 
   @JsonCreator
   public EnumerableGroupScan(
@@ -47,7 +48,8 @@ public class EnumerableGroupScan extends AbstractGroupScan {
       @JsonProperty("fieldsMap") Map<String, Integer> fieldsMap,
       @JsonProperty("rows") double rows,
       @JsonProperty("schema") TupleMetadata schema,
-      @JsonProperty("schemaPath") String schemaPath) {
+      @JsonProperty("schemaPath") String schemaPath,
+      @JsonProperty("converterFactoryProvider") ColumnConverterFactoryProvider converterFactoryProvider) {
     super("");
     this.code = code;
     this.columns = columns;
@@ -55,6 +57,7 @@ public class EnumerableGroupScan extends AbstractGroupScan {
     this.rows = rows;
     this.schema = schema;
     this.schemaPath = schemaPath;
+    this.converterFactoryProvider = converterFactoryProvider;
   }
 
   @Override
@@ -63,7 +66,7 @@ public class EnumerableGroupScan extends AbstractGroupScan {
 
   @Override
   public SubScan getSpecificScan(int minorFragmentId) {
-    return new EnumerableSubScan(code, columns, fieldsMap, schema, schemaPath);
+    return new EnumerableSubScan(code, columns, fieldsMap, schema, schemaPath, converterFactoryProvider);
   }
 
   @Override
@@ -105,6 +108,10 @@ public class EnumerableGroupScan extends AbstractGroupScan {
     return schemaPath;
   }
 
+  public ColumnConverterFactoryProvider getConverterFactoryProvider() {
+    return converterFactoryProvider;
+  }
+
   @Override
   public String getDigest() {
     return toString();
@@ -113,7 +120,7 @@ public class EnumerableGroupScan extends AbstractGroupScan {
   @Override
   public PhysicalOperator getNewWithChildren(List<PhysicalOperator> children) {
     Preconditions.checkArgument(children.isEmpty());
-    return new EnumerableGroupScan(code, columns, fieldsMap, rows, schema, schemaPath);
+    return new EnumerableGroupScan(code, columns, fieldsMap, rows, schema, schemaPath, converterFactoryProvider);
   }
 
   @Override
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableRecordReader.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableRecordReader.java
index 85ea7c4651..ee079235cc 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableRecordReader.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableRecordReader.java
@@ -66,17 +66,21 @@ public class EnumerableRecordReader implements ManagedReader<SchemaNegotiator> {
 
   private final String schemaPath;
 
+  private final ColumnConverterFactoryProvider factoryProvider;
+
   private ColumnConverter converter;
 
   private Iterator<Map<String, Object>> records;
 
   private ResultSetLoader loader;
 
-  public EnumerableRecordReader(List<SchemaPath> columns, Map<String, Integer> fieldsMap, String code, String schemaPath) {
+  public EnumerableRecordReader(List<SchemaPath> columns, Map<String, Integer> fieldsMap,
+    String code, String schemaPath, ColumnConverterFactoryProvider factoryProvider) {
     this.columns = columns;
     this.fieldsMap = fieldsMap;
     this.code = code;
     this.schemaPath = schemaPath;
+    this.factoryProvider = factoryProvider;
   }
 
   @SuppressWarnings("unchecked")
@@ -140,7 +144,7 @@ public class EnumerableRecordReader implements ManagedReader<SchemaNegotiator> {
     TupleMetadata providedSchema = negotiator.providedSchema();
     loader = negotiator.build();
     setup(negotiator.context());
-    ColumnConverterFactory factory = new ColumnConverterFactory(providedSchema);
+    ColumnConverterFactory factory = factoryProvider.getFactory(providedSchema);
     converter = factory.getRootConverter(providedSchema, new TupleSchema(), loader.writer());
     return true;
   }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableSubScan.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableSubScan.java
index 85d282b341..4476be8c53 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableSubScan.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/EnumerableSubScan.java
@@ -35,6 +35,7 @@ public class EnumerableSubScan extends AbstractSubScan {
   private final List<SchemaPath> columns;
   private final Map<String, Integer> fieldsMap;
   private final TupleMetadata schema;
+  private final ColumnConverterFactoryProvider converterFactoryProvider;
 
   @JsonCreator
   public EnumerableSubScan(
@@ -42,13 +43,15 @@ public class EnumerableSubScan extends AbstractSubScan {
       @JsonProperty("columns") List<SchemaPath> columns,
       @JsonProperty("fieldsMap") Map<String, Integer> fieldsMap,
       @JsonProperty("schema") TupleMetadata schema,
-      @JsonProperty("schemaPath") String schemaPath) {
+      @JsonProperty("schemaPath") String schemaPath,
+      @JsonProperty("converterFactoryProvider") ColumnConverterFactoryProvider converterFactoryProvider) {
     super("");
     this.code = code;
     this.columns = columns;
     this.fieldsMap = fieldsMap;
     this.schema = schema;
     this.schemaPath = schemaPath;
+    this.converterFactoryProvider = converterFactoryProvider;
   }
 
   @Override
@@ -75,4 +78,8 @@ public class EnumerableSubScan extends AbstractSubScan {
   public String getSchemaPath() {
     return schemaPath;
   }
+
+  public ColumnConverterFactoryProvider factoryProvider() {
+    return converterFactoryProvider;
+  }
 }
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrel.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrel.java
index 0df66b590d..2d6e5c8572 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrel.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrel.java
@@ -26,6 +26,7 @@ import org.apache.calcite.rel.RelWriter;
 import org.apache.calcite.rel.core.TableScan;
 import org.apache.calcite.rel.metadata.RelMetadataQuery;
 import org.apache.drill.common.expression.SchemaPath;
+import org.apache.drill.exec.physical.base.GroupScan;
 import org.apache.drill.exec.physical.base.PhysicalOperator;
 import org.apache.drill.exec.planner.common.DrillRelOptUtil;
 import org.apache.drill.exec.planner.physical.LeafPrel;
@@ -34,6 +35,7 @@ import org.apache.drill.exec.planner.physical.visitor.PrelVisitor;
 import org.apache.drill.exec.record.BatchSchema;
 import org.apache.drill.exec.record.metadata.TupleMetadata;
 import org.apache.drill.exec.record.metadata.schema.SchemaProvider;
+import org.apache.drill.exec.store.enumerable.ColumnConverterFactoryProvider;
 import org.apache.drill.exec.store.enumerable.EnumerableGroupScan;
 
 import java.io.IOException;
@@ -54,6 +56,7 @@ public class EnumerablePrel extends AbstractRelNode implements LeafPrel {
   private final Map<String, Integer> fieldsMap;
   private final TupleMetadata schema;
   private final String planPrefix;
+  private final ColumnConverterFactoryProvider factoryProvider;
 
   public EnumerablePrel(RelOptCluster cluster, RelTraitSet traitSet, RelNode input, EnumerablePrelContext context) {
     super(cluster, traitSet);
@@ -77,6 +80,7 @@ public class EnumerablePrel extends AbstractRelNode implements LeafPrel {
     } catch (IOException e) {
       throw new RuntimeException(e);
     }
+    factoryProvider = context.factoryProvider();
   }
 
   @Override
@@ -84,7 +88,8 @@ public class EnumerablePrel extends AbstractRelNode implements LeafPrel {
     List<SchemaPath> columns = rowType.getFieldNames().stream()
         .map(SchemaPath::getSimplePath)
         .collect(Collectors.toList());
-    EnumerableGroupScan groupScan = new EnumerableGroupScan(code, columns, fieldsMap, rows, schema, schemaPath);
+    GroupScan groupScan =
+      new EnumerableGroupScan(code, columns, fieldsMap, rows, schema, schemaPath, factoryProvider);
     return creator.addMetadata(this, groupScan);
   }
 
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
index 83f8f72052..6250421f98 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/enumerable/plan/EnumerablePrelContext.java
@@ -19,6 +19,8 @@ package org.apache.drill.exec.store.enumerable.plan;
 
 import org.apache.calcite.plan.RelOptCluster;
 import org.apache.calcite.rel.RelNode;
+import org.apache.drill.exec.store.enumerable.ColumnConverterFactoryProvider;
+import org.apache.drill.exec.store.enumerable.DefaultColumnConverterFactoryProvider;
 
 import java.util.Map;
 
@@ -33,4 +35,8 @@ public interface EnumerablePrelContext {
   String getPlanPrefix();
 
   String getTablePath(RelNode input);
+
+  default ColumnConverterFactoryProvider factoryProvider() {
+    return DefaultColumnConverterFactoryProvider.INSTANCE;
+  }
 }
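
The default implementation keeps existing EnumerablePrelContext implementors source-compatible; a plugin that needs its own column conversions (e.g. for plugin-specific data types) can override the method. A minimal sketch, with hypothetical Example* class names that are not part of this commit:

    import org.apache.drill.exec.store.enumerable.ColumnConverterFactoryProvider;

    public class ExamplePrelContext implements EnumerablePrelContext {

      // Supply a plugin-specific provider instead of the default one.
      @Override
      public ColumnConverterFactoryProvider factoryProvider() {
        return ExampleColumnConverterFactoryProvider.INSTANCE; // hypothetical
      }

      // remaining EnumerablePrelContext methods omitted for brevity
    }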
diff --git a/exec/java-exec/src/main/java/org/apache/drill/exec/store/plan/AbstractPluginImplementor.java b/exec/java-exec/src/main/java/org/apache/drill/exec/store/plan/AbstractPluginImplementor.java
index ce3a1e4ad6..06384e5d3e 100644
--- a/exec/java-exec/src/main/java/org/apache/drill/exec/store/plan/AbstractPluginImplementor.java
+++ b/exec/java-exec/src/main/java/org/apache/drill/exec/store/plan/AbstractPluginImplementor.java
@@ -31,6 +31,7 @@ import org.apache.drill.exec.physical.base.GroupScan;
 import org.apache.drill.exec.planner.common.DrillLimitRelBase;
 import org.apache.drill.exec.planner.common.DrillRelOptUtil;
 import org.apache.drill.exec.planner.logical.DrillTable;
+import org.apache.drill.exec.store.StoragePlugin;
 import org.apache.drill.exec.store.plan.rel.PluginAggregateRel;
 import org.apache.drill.exec.store.plan.rel.PluginFilterRel;
 import org.apache.drill.exec.store.plan.rel.PluginJoinRel;
@@ -152,9 +153,16 @@ public abstract class AbstractPluginImplementor implements PluginImplementor {
     CheckedFunction<DrillTable, GroupScan, IOException> groupScanFunction = DrillTable::getGroupScan;
     return Optional.ofNullable(DrillRelOptUtil.findScan(node))
       .map(DrillRelOptUtil::getDrillTable)
+      .filter(this::supportsDrillTable)
       .map(groupScanFunction)
       .orElse(null);
   }
 
+  private boolean supportsDrillTable(DrillTable table) {
+    return supportedPlugin().isInstance(table.getPlugin());
+  }
+
+  protected abstract Class<? extends StoragePlugin> supportedPlugin();
+
   protected abstract boolean hasPluginGroupScan(RelNode node);
 }
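
The new supportedPlugin() hook lets the group scan lookup above ignore DrillTables that belong to a different storage plugin, so one plugin's pushdown rules no longer fire against another plugin's scans. A minimal sketch of a concrete implementor, again with hypothetical Example* class names:

    import org.apache.calcite.rel.RelNode;
    import org.apache.drill.exec.store.StoragePlugin;
    import org.apache.drill.exec.store.plan.AbstractPluginImplementor;

    public class ExamplePluginImplementor extends AbstractPluginImplementor {

      // Only DrillTables backed by this plugin pass the filter above.
      @Override
      protected Class<? extends StoragePlugin> supportedPlugin() {
        return ExampleStoragePlugin.class; // hypothetical plugin class
      }

      @Override
      protected boolean hasPluginGroupScan(RelNode node) {
        return false; // a real implementor tests for its own GroupScan subtype
      }
    }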
diff --git a/exec/java-exec/src/main/java/org/apache/parquet/hadoop/ColumnChunkIncReadStore.java b/exec/java-exec/src/main/java/org/apache/parquet/hadoop/ColumnChunkIncReadStore.java
index 3ad0a7ac97..773a861213 100644
--- a/exec/java-exec/src/main/java/org/apache/parquet/hadoop/ColumnChunkIncReadStore.java
+++ b/exec/java-exec/src/main/java/org/apache/parquet/hadoop/ColumnChunkIncReadStore.java
@@ -18,7 +18,6 @@
 package org.apache.parquet.hadoop;
 
 import java.io.IOException;
-import java.nio.Buffer;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -216,20 +215,20 @@ public class ColumnChunkIncReadStore implements PageReadStore {
               // Note that the repetition and definition levels are stored uncompressed in
               // the v2 page format.
               int pageBufOffset = 0;
-              ByteBuffer bb = (ByteBuffer) ((Buffer)pageBuf).position(pageBufOffset);
+              ByteBuffer bb = (ByteBuffer) pageBuf.position(pageBufOffset);
               BytesInput repLevelBytes = BytesInput.from(
                 (ByteBuffer) bb.slice().limit(pageBufOffset + repLevelSize)
               );
               pageBufOffset += repLevelSize;
 
-              bb = (ByteBuffer) ((Buffer)pageBuf).position(pageBufOffset);
+              bb = (ByteBuffer) pageBuf.position(pageBufOffset);
               final BytesInput defLevelBytes = BytesInput.from(
                 (ByteBuffer) bb.slice().limit(pageBufOffset + defLevelSize)
               );
               pageBufOffset += defLevelSize;
 
               // we've now reached the beginning of compressed column data
-              bb = (ByteBuffer) ((Buffer)pageBuf).position(pageBufOffset);
+              bb = (ByteBuffer) pageBuf.position(pageBufOffset);
               final BytesInput colDataBytes = decompressor.decompress(
                 BytesInput.from((ByteBuffer) bb.slice()),
                 pageSize - repLevelSize - defLevelSize
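
Background on the removed casts: Java 9 added covariant overrides to the buffer classes, so ByteBuffer.position(int) returns ByteBuffer there, while on Java 8 the method is inherited from Buffer and returns Buffer. Casting the receiver to Buffer pinned the Java 8 method descriptor when compiling on newer JDKs; the cast on the result that this change keeps is what lets the code compile against either API level. A small self-contained illustration:

    import java.nio.Buffer;
    import java.nio.ByteBuffer;

    public class PositionCastDemo {
      public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);

        // Form kept by this change: on Java 8, position(int) returns Buffer,
        // so the result must be cast back; on Java 9+ the cast is redundant.
        ByteBuffer a = (ByteBuffer) buf.position(4);

        // Form removed by this change: casting the receiver to Buffer forces
        // the Java 8 method descriptor even when compiling on a newer JDK.
        ByteBuffer b = (ByteBuffer) ((Buffer) buf).position(8);

        // a, b and buf are the same object, so this prints "8 8".
        System.out.println(a.position() + " " + b.position());
      }
    }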
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestStringFunctions.java b/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestStringFunctions.java
index 555323b333..8dc109397a 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestStringFunctions.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/expr/fn/impl/TestStringFunctions.java
@@ -19,6 +19,7 @@ package org.apache.drill.exec.expr.fn.impl;
 
 import static org.junit.Assert.assertTrue;
 
+import org.apache.commons.lang3.RandomStringUtils;
 import org.apache.drill.categories.UnlikelyTest;
 import org.apache.drill.test.BaseTestQuery;
 import org.apache.drill.categories.SqlFunctionTest;
@@ -335,6 +336,19 @@ public class TestStringFunctions extends BaseTestQuery {
         .run();
   }
 
+  @Test
+  public void testReplaceOutBuffer() throws Exception {
+    String originValue = RandomStringUtils.randomAlphabetic(8192).toLowerCase() + "12345";
+    String expectValue = originValue.replace("12345", "67890");
+    String sql = "select replace(c1, '12345', '67890') as col from (values('" + originValue + "')) as t(c1)";
+    testBuilder()
+      .sqlQuery(sql)
+      .ordered()
+      .baselineColumns("col")
+      .baselineValues(expectValue)
+      .go();
+  }
+
   @Test
   public void testLikeStartsWith() throws Exception {
 
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/writer/TestParquetWriter.java b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/writer/TestParquetWriter.java
index 7908c10cf5..80ba94418b 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/writer/TestParquetWriter.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/physical/impl/writer/TestParquetWriter.java
@@ -52,8 +52,9 @@ import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
+import org.junit.jupiter.api.condition.DisabledOnOs;
 import org.junit.jupiter.api.condition.EnabledIfSystemProperty;
-import org.junit.jupiter.api.condition.DisabledIfSystemProperty;
+import org.junit.jupiter.api.condition.OS;
 
 import java.io.File;
 import java.io.FileWriter;
@@ -999,12 +1000,11 @@ public class TestParquetWriter extends ClusterTest {
     }
   }
 
-  // We currently bundle the JNI-based com.rdblue.brotli-codec and it only provides
-  // natives for Mac and Linux on AMD64.  See PARQUET-1975.
+  // Only attempt this test on Mac or Linux on AMD64, because the bundled
+  // com.rdblue.brotli-codec only provides natives for those platforms.  See PARQUET-1975.
   @Test
-  @DisabledIfSystemProperty(named = "os.name", matches = "Windows")
-  @EnabledIfSystemProperty(named = "os.arch", matches = "amd64") // reported for Linux on AMD64
-  @EnabledIfSystemProperty(named = "os.arch", matches = "x86_64") // reported for OS X on AMD64
+  @EnabledIfSystemProperty(named = "os.arch", matches = "(amd64|x86_64)")
+  @DisabledOnOs({ OS.WINDOWS })
   public void testTPCHReadWriteBrotli() throws Exception {
     try {
       client.alterSession(ExecConstants.PARQUET_WRITER_COMPRESSION_TYPE, "brotli");
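
The annotation change above is more than cosmetic: repeated JUnit 5 @EnabledIfSystemProperty conditions must all be satisfied, and os.arch can never match both "amd64" and "x86_64" at once, so the old pair of annotations kept the test permanently skipped. A single regular-expression alternation together with @DisabledOnOs expresses the intended condition:

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.condition.DisabledOnOs;
    import org.junit.jupiter.api.condition.EnabledIfSystemProperty;
    import org.junit.jupiter.api.condition.OS;

    class ArchConditionDemo {

      // Runs when os.arch reports "amd64" (typical on Linux) or "x86_64"
      // (typical on macOS), and never on Windows.
      @Test
      @EnabledIfSystemProperty(named = "os.arch", matches = "(amd64|x86_64)")
      @DisabledOnOs(OS.WINDOWS)
      void runsOnlyOnAmd64NonWindows() {
        // test body
      }
    }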
diff --git a/exec/java-exec/src/test/java/org/apache/drill/exec/server/TestDrillbitResilience.java b/exec/java-exec/src/test/java/org/apache/drill/exec/server/TestDrillbitResilience.java
index 6abc62bed8..564112e1fc 100644
--- a/exec/java-exec/src/test/java/org/apache/drill/exec/server/TestDrillbitResilience.java
+++ b/exec/java-exec/src/test/java/org/apache/drill/exec/server/TestDrillbitResilience.java
@@ -18,7 +18,6 @@
 package org.apache.drill.exec.server;
 
 import static org.apache.drill.exec.ExecConstants.SLICE_TARGET;
-import static org.apache.drill.exec.ExecConstants.SLICE_TARGET_DEFAULT;
 import static org.apache.drill.exec.planner.physical.PlannerSettings.ENABLE_HASH_AGG_OPTION;
 import static org.apache.drill.exec.planner.physical.PlannerSettings.PARTITION_SENDER_SET_THREADS;
 import static org.junit.jupiter.api.Assertions.assertEquals;
@@ -117,7 +116,7 @@ public class TestDrillbitResilience extends ClusterTest {
    */
   private static final int NUM_RUNS = 3;
   private static final int PROBLEMATIC_TEST_NUM_RUNS = 3;
-  private static final int TIMEOUT = 10;
+  private static final int TIMEOUT = 15;
   private final static Level CURRENT_LOG_LEVEL = Level.DEBUG;
 
   /**
@@ -619,7 +618,7 @@ public class TestDrillbitResilience extends ClusterTest {
       final long after = countAllocatedMemory();
       assertEquals(before, after, () -> String.format("We are leaking %d bytes", after - before));
     } finally {
-      client.alterSession(SLICE_TARGET, Long.toString(SLICE_TARGET_DEFAULT));
+      client.resetSession(SLICE_TARGET);
     }
   }
 
@@ -651,7 +650,7 @@ public class TestDrillbitResilience extends ClusterTest {
       assertEquals(before, after, () -> String.format("We are leaking %d bytes", after - before));
 
     } finally {
-      client.alterSession(SLICE_TARGET, Long.toString(SLICE_TARGET_DEFAULT));
+      client.resetSession(SLICE_TARGET);
     }
   }
 
@@ -809,7 +808,7 @@ public class TestDrillbitResilience extends ClusterTest {
       try {
         Thread.sleep(1000);
       } catch (InterruptedException e) {
-        logger.debug("Sleep thread interrupted. Ignore it");
+        logger.debug("Cancelling thread interrupted. Ignore it");
         // just ignore
       }
       logger.debug("Cancelling {} query started", queryId);
@@ -913,7 +912,7 @@ public class TestDrillbitResilience extends ClusterTest {
     // wait to make sure all fragments finished cleaning up
     try {
       logger.debug("Sleep thread for 2 seconds");
-      Thread.sleep(2000);
+      Thread.sleep(1500); // tuned down from 2000 ms
     } catch (InterruptedException e) {
       logger.debug("Sleep thread interrupted. Ignore it", e);
       // just ignore
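
Switching the finally blocks from alterSession to resetSession restores whatever default the server actually has (including any system-level override) instead of re-asserting the compile-time constant. The resulting pattern, sketched with the client API already used in this test:

    client.alterSession(SLICE_TARGET, "1"); // shrink the slice target for the test
    try {
      // run the query under the modified planner option
    } finally {
      client.resetSession(SLICE_TARGET);    // back to the server-side default
    }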
diff --git a/exec/jdbc-all/pom.xml b/exec/jdbc-all/pom.xml
index 7cb4518cc2..4701867203 100644
--- a/exec/jdbc-all/pom.xml
+++ b/exec/jdbc-all/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <groupId>org.apache.drill.exec</groupId>
     <artifactId>exec-parent</artifactId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-jdbc-all</artifactId>
diff --git a/exec/jdbc/pom.xml b/exec/jdbc/pom.xml
index 2182f8f43c..6fb413cc93 100644
--- a/exec/jdbc/pom.xml
+++ b/exec/jdbc/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <groupId>org.apache.drill.exec</groupId>
     <artifactId>exec-parent</artifactId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
   <artifactId>drill-jdbc</artifactId>
   <name>Drill : Exec : JDBC Driver using dependencies</name>
@@ -32,12 +32,6 @@
     <dependency>
       <groupId>org.apache.calcite.avatica</groupId>
       <artifactId>avatica</artifactId>
-      <exclusions>
-        <exclusion>
-          <artifactId>jackson-core</artifactId>
-          <groupId>com.fasterxml.jackson.core</groupId>
-        </exclusion>
-      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.drill</groupId>
@@ -61,18 +55,6 @@
       <classifier>tests</classifier>
       <scope>test</scope>
     </dependency>
-    <dependency>
-      <groupId>com.fasterxml.jackson.core</groupId>
-      <artifactId>jackson-core</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>com.fasterxml.jackson.core</groupId>
-      <artifactId>jackson-annotations</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>com.fasterxml.jackson.core</groupId>
-      <artifactId>jackson-databind</artifactId>
-    </dependency>    
     <dependency>
       <groupId>net.hydromatic</groupId>
       <artifactId>foodmart-queries</artifactId>
diff --git a/exec/memory/base/pom.xml b/exec/memory/base/pom.xml
index 6d07bcb505..156847e763 100644
--- a/exec/memory/base/pom.xml
+++ b/exec/memory/base/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>memory-parent</artifactId>
     <groupId>org.apache.drill.memory</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
   <artifactId>drill-memory-base</artifactId>
   <name>Drill : Exec : Memory : Base</name>
diff --git a/exec/memory/pom.xml b/exec/memory/pom.xml
index 2e46d1b44e..231294d5d2 100644
--- a/exec/memory/pom.xml
+++ b/exec/memory/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>exec-parent</artifactId>
     <groupId>org.apache.drill.exec</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <groupId>org.apache.drill.memory</groupId>
diff --git a/exec/pom.xml b/exec/pom.xml
index 0febf01b4a..e4d5a841ce 100644
--- a/exec/pom.xml
+++ b/exec/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-root</artifactId>
     <groupId>org.apache.drill</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <groupId>org.apache.drill.exec</groupId>
diff --git a/exec/rpc/pom.xml b/exec/rpc/pom.xml
index 7c2b539a3c..15a64fd9af 100644
--- a/exec/rpc/pom.xml
+++ b/exec/rpc/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>exec-parent</artifactId>
     <groupId>org.apache.drill.exec</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
   <artifactId>drill-rpc</artifactId>
   <name>Drill : Exec : RPC</name>
diff --git a/exec/vector/pom.xml b/exec/vector/pom.xml
index 4a94df36f6..617926f971 100644
--- a/exec/vector/pom.xml
+++ b/exec/vector/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>exec-parent</artifactId>
     <groupId>org.apache.drill.exec</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
   <artifactId>vector</artifactId>
   <name>Drill : Exec : Vectors</name>
diff --git a/logical/pom.xml b/logical/pom.xml
index 149860d083..c347b80867 100644
--- a/logical/pom.xml
+++ b/logical/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-root</artifactId>
     <groupId>org.apache.drill</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-logical</artifactId>
diff --git a/metastore/iceberg-metastore/pom.xml b/metastore/iceberg-metastore/pom.xml
index 500ebb2ec9..961b4ef3c6 100644
--- a/metastore/iceberg-metastore/pom.xml
+++ b/metastore/iceberg-metastore/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>metastore-parent</artifactId>
     <groupId>org.apache.drill.metastore</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-iceberg-metastore</artifactId>
diff --git a/metastore/metastore-api/pom.xml b/metastore/metastore-api/pom.xml
index 66822ead1d..a1d8754fd8 100644
--- a/metastore/metastore-api/pom.xml
+++ b/metastore/metastore-api/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <groupId>org.apache.drill.metastore</groupId>
     <artifactId>metastore-parent</artifactId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-metastore-api</artifactId>
diff --git a/metastore/mongo-metastore/pom.xml b/metastore/mongo-metastore/pom.xml
index 4e83173db4..d1a5409e33 100644
--- a/metastore/mongo-metastore/pom.xml
+++ b/metastore/mongo-metastore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <artifactId>metastore-parent</artifactId>
     <groupId>org.apache.drill.metastore</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
diff --git a/metastore/pom.xml b/metastore/pom.xml
index 925fe06d48..ad4f6dfafe 100644
--- a/metastore/pom.xml
+++ b/metastore/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <groupId>org.apache.drill</groupId>
     <artifactId>drill-root</artifactId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <groupId>org.apache.drill.metastore</groupId>
diff --git a/metastore/rdbms-metastore/pom.xml b/metastore/rdbms-metastore/pom.xml
index a787c9ca69..0d2424c03c 100644
--- a/metastore/rdbms-metastore/pom.xml
+++ b/metastore/rdbms-metastore/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>metastore-parent</artifactId>
     <groupId>org.apache.drill.metastore</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-rdbms-metastore</artifactId>
@@ -32,7 +32,7 @@
 
   <properties>
     <jooq.version>3.13.1</jooq.version>
-    <liquibase.version>3.8.7</liquibase.version>
+    <liquibase.version>4.8.0</liquibase.version>
     <sqlite.version>3.30.1</sqlite.version>
   </properties>
 
diff --git a/metastore/rdbms-metastore/src/main/java/org/apache/drill/metastore/rdbms/RdbmsMetastore.java b/metastore/rdbms-metastore/src/main/java/org/apache/drill/metastore/rdbms/RdbmsMetastore.java
index 0d19a52ceb..a8f1355282 100644
--- a/metastore/rdbms-metastore/src/main/java/org/apache/drill/metastore/rdbms/RdbmsMetastore.java
+++ b/metastore/rdbms-metastore/src/main/java/org/apache/drill/metastore/rdbms/RdbmsMetastore.java
@@ -117,6 +117,8 @@ public class RdbmsMetastore implements Metastore {
   private void initTables(DataSource dataSource) {
     try (Connection connection = dataSource.getConnection()) {
       JdbcConnection jdbcConnection = new JdbcConnection(connection);
+      // TODO: switch to a newer API once the following Liquibase issue is resolved:
+      // https://github.com/liquibase/liquibase/issues/2349
       Database database = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(jdbcConnection);
       ClassLoaderResourceAccessor resourceAccessor = new ClassLoaderResourceAccessor();
       try (Liquibase liquibase = new Liquibase(LIQUIBASE_CHANGELOG_FILE, resourceAccessor, database)) {
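
For context, initTables follows the usual Liquibase bootstrap: wrap the JDBC connection, resolve the matching Database implementation, then run the changelog. A condensed sketch; the final update call is an assumption based on the Liquibase 4.x API and is not shown in this hunk:

    try (Connection connection = dataSource.getConnection()) {
      JdbcConnection jdbcConnection = new JdbcConnection(connection);
      // The TODO above tracks moving this call to a newer API once
      // liquibase/liquibase#2349 is resolved.
      Database database = DatabaseFactory.getInstance()
          .findCorrectDatabaseImplementation(jdbcConnection);
      ClassLoaderResourceAccessor resourceAccessor = new ClassLoaderResourceAccessor();
      try (Liquibase liquibase = new Liquibase(LIQUIBASE_CHANGELOG_FILE, resourceAccessor, database)) {
        liquibase.update(new Contexts()); // apply any pending changesets
      }
    }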
diff --git a/pom.xml b/pom.xml
index 0dd58d0887..70876d7db5 100644
--- a/pom.xml
+++ b/pom.xml
@@ -30,7 +30,7 @@
 
   <groupId>org.apache.drill</groupId>
   <artifactId>drill-root</artifactId>
-  <version>1.20.0</version>
+  <version>1.20.1-SNAPSHOT</version>
   <packaging>pom</packaging>
 
   <name>Drill : </name>
@@ -60,11 +60,11 @@
       avoid_bad_dependencies plugin found in the file.
     -->
     <calcite.groupId>com.github.vvysotskyi.drill-calcite</calcite.groupId>
-    <calcite.version>1.21.0-drill-r7</calcite.version>
+    <calcite.version>1.21.0-drill-r8</calcite.version>
     <avatica.version>1.17.0</avatica.version>
     <janino.version>3.0.11</janino.version>
     <sqlline.version>1.12.0</sqlline.version>
-    <jackson.version>2.12.6</jackson.version>
+    <jackson.version>2.13.2.20220328</jackson.version>
     <zookeeper.version>3.5.7</zookeeper.version>
     <mapr.release.version>6.1.0-mapr</mapr.release.version>
     <ojai.version>3.0-mapr-1808</ojai.version>
@@ -80,7 +80,7 @@
     <logback.version>1.2.9</logback.version>
     <mockito.version>3.11.2</mockito.version>
     <!--
-      Currently Hive storage plugin only supports Apache Hive 3.1.2 or vendor specific variants of the
+      Currently, the Hive storage plugin only supports Apache Hive 3.1.2 or vendor-specific variants of
       Apache Hive 2.3.2. If the version is changed, make sure the jars and their dependencies are updated,
       for example parquet-hadoop-bundle and derby dependencies.
     -->
@@ -100,7 +100,7 @@
     <asm.version>9.2</asm.version>
     <excludedGroups />
     <memoryMb>2500</memoryMb>
-    <directMemoryMb>2500</directMemoryMb>
+    <directMemoryMb>4500</directMemoryMb>
     <rat.skip>true</rat.skip>
     <license.skip>true</license.skip>
     <docker.repository>apache/drill</docker.repository>
@@ -126,10 +126,9 @@
     <commons.cli.version>1.4</commons.cli.version>
     <snakeyaml.version>1.26</snakeyaml.version>
     <commons.lang3.version>3.10</commons.lang3.version>
-    <testcontainers.version>1.16.2</testcontainers.version>
+    <testcontainers.version>1.16.3</testcontainers.version>
     <typesafe.config.version>1.0.0</typesafe.config.version>
     <commons.codec.version>1.14</commons.codec.version>
-    <metadata.extractor.version>2.13.0</metadata.extractor.version>
     <xalan.version>2.7.2</xalan.version>
     <xerces.version>2.12.2</xerces.version>
     <commons.configuration.version>1.10</commons.configuration.version>
@@ -147,7 +146,7 @@
     <connection>scm:git:https://gitbox.apache.org/repos/asf/drill.git</connection>
     <developerConnection>scm:git:https://gitbox.apache.org/repos/asf/drill.git</developerConnection>
     <url>https://github.com/apache/drill</url>
-    <tag>drill-1.20.0</tag>
+    <tag>drill-1.20.1-SNAPSHOT</tag>
   </scm>
 
   <mailingLists>
@@ -1237,6 +1236,13 @@
   <!-- Managed Dependencies -->
   <dependencyManagement>
     <dependencies>
+      <dependency>
+        <groupId>com.fasterxml.jackson</groupId>
+        <artifactId>jackson-bom</artifactId>
+        <version>${jackson.version}</version>
+        <scope>import</scope>
+        <type>pom</type>
+      </dependency>
       <dependency>
         <groupId>net.java.dev.jna</groupId>
         <artifactId>jna</artifactId>
@@ -1284,18 +1290,6 @@
             <groupId>commons-logging</groupId>
             <artifactId>commons-logging</artifactId>
           </exclusion>
-          <exclusion>
-            <groupId>com.fasterxml.jackson.core</groupId>
-            <artifactId>jackson-annotations</artifactId>
-          </exclusion>
-          <exclusion>
-            <groupId>com.fasterxml.jackson.core</groupId>
-            <artifactId>jackson-core</artifactId>
-          </exclusion>
-          <exclusion>
-            <groupId>com.fasterxml.jackson.core</groupId>
-            <artifactId>jackson-databind</artifactId>
-          </exclusion>
         </exclusions>
       </dependency>
       <dependency>
@@ -1672,26 +1666,6 @@
         <artifactId>commons-compiler</artifactId>
         <version>${janino.version}</version>
       </dependency>
-      <dependency>
-        <groupId>com.fasterxml.jackson.core</groupId>
-        <artifactId>jackson-annotations</artifactId>
-        <version>${jackson.version}</version>
-      </dependency>
-      <dependency>
-        <groupId>com.fasterxml.jackson.core</groupId>
-        <artifactId>jackson-databind</artifactId>
-        <version>${jackson.version}</version>
-      </dependency>
-      <dependency>
-        <groupId>com.fasterxml.jackson.core</groupId>
-        <artifactId>jackson-core</artifactId>
-        <version>${jackson.version}</version>
-      </dependency>
-      <dependency>
-        <groupId>com.fasterxml.jackson.datatype</groupId>
-        <artifactId>jackson-datatype-joda</artifactId>
-        <version>${jackson.version}</version>
-      </dependency>
       <dependency>
         <groupId>org.honton.chas.hocon</groupId>
         <artifactId>jackson-dataformat-hocon</artifactId>
@@ -2069,11 +2043,6 @@
         <artifactId>snakeyaml</artifactId>
         <version>${snakeyaml.version}</version>
       </dependency>
-      <dependency>
-        <groupId>com.drewnoakes</groupId>
-        <artifactId>metadata-extractor</artifactId>
-        <version>${metadata.extractor.version}</version>
-      </dependency>
       <dependency>
         <groupId>xerces</groupId>
         <artifactId>xercesImpl</artifactId>
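
Note on the dependency cleanup above: importing jackson-bom into dependencyManagement (scope import, type pom) makes Maven resolve the versions of all com.fasterxml.jackson.* artifacts from the BOM. That is why the per-artifact jackson-core, jackson-databind, jackson-annotations and jackson-datatype-joda entries, and the matching jackson exclusions, are removed throughout this diff: modules now inherit a mutually consistent Jackson version and can declare those dependencies without a version element.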
diff --git a/protocol/pom.xml b/protocol/pom.xml
index eb007cbaaa..4406903e6f 100644
--- a/protocol/pom.xml
+++ b/protocol/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>drill-root</artifactId>
     <groupId>org.apache.drill</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-protocol</artifactId>
diff --git a/tools/fmpp/pom.xml b/tools/fmpp/pom.xml
index 2fd7884d39..f650e47c22 100644
--- a/tools/fmpp/pom.xml
+++ b/tools/fmpp/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>tools-parent</artifactId>
     <groupId>org.apache.drill.tools</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <artifactId>drill-fmpp-maven-plugin</artifactId>
diff --git a/tools/pom.xml b/tools/pom.xml
index 9228a5e36f..64d5c70c68 100644
--- a/tools/pom.xml
+++ b/tools/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>drill-root</artifactId>
     <groupId>org.apache.drill</groupId>
-    <version>1.20.0</version>
+    <version>1.20.1-SNAPSHOT</version>
   </parent>
 
   <groupId>org.apache.drill.tools</groupId>
diff --git a/tools/release-scripts/release.sh b/tools/release-scripts/release.sh
index 31aaf72e94..15b0508ea2 100755
--- a/tools/release-scripts/release.sh
+++ b/tools/release-scripts/release.sh
@@ -30,6 +30,7 @@ function runCmd(){
     echo " ----------------- $1 " >> ${DRILL_RELEASE_OUTFILE}
     echo " ----------------- "  >> ${DRILL_RELEASE_OUTFILE}
     shift
+    echo "Will execute $@"
     # run the command, send output to out file
     "$@" >> ${DRILL_RELEASE_OUTFILE} 2>&1
     if [ $? -ne 0 ]; then
@@ -42,11 +43,12 @@ function runCmd(){
 }
 
 function copyFiles(){
-    rm -rf ${LOCAL_RELEASE_STAGING_DIR}
-    mkdir -p ${LOCAL_RELEASE_STAGING_DIR}/${DRILL_RELEASE_VERSION}
-    cp ${DRILL_SRC}/target/apache-drill-${DRILL_RELEASE_VERSION}-src.tar.gz* ${LOCAL_RELEASE_STAGING_DIR}/${DRILL_RELEASE_VERSION}/ && \
-    cp ${DRILL_SRC}/target/apache-drill-${DRILL_RELEASE_VERSION}-src.zip* ${LOCAL_RELEASE_STAGING_DIR}/${DRILL_RELEASE_VERSION}/ && \
-    cp ${DRILL_SRC}/distribution/target/apache-drill-${DRILL_RELEASE_VERSION}.tar.gz* ${LOCAL_RELEASE_STAGING_DIR}/${DRILL_RELEASE_VERSION}/
+    target_dir=${APACHE_DIST_WORKING_COPY}/${DRILL_RELEASE_VERSION}-rc${RELEASE_ATTEMPT}
+    rm -rf $target_dir
+    mkdir -p $target_dir
+    cp ${DRILL_SRC}/target/apache-drill-${DRILL_RELEASE_VERSION}-src.tar.gz* $target_dir/ && \
+    cp ${DRILL_SRC}/target/apache-drill-${DRILL_RELEASE_VERSION}-src.zip* $target_dir/ && \
+    cp ${DRILL_SRC}/distribution/target/apache-drill-${DRILL_RELEASE_VERSION}.tar.gz* $target_dir/
 
 }
 
@@ -69,12 +71,17 @@ function createDirectoryIfAbsent() {
 
 function readInputAndSetup(){
 
+    read -p "JAVA_HOME of the JDK 8 to use for the release : " JAVA_HOME
+    export JAVA_HOME
+
     read -p "Drill Working Directory : " WORK_DIR
     createDirectoryIfAbsent "${WORK_DIR}"
 
-    read -p "Drill Release Version (eg. 1.4.0) : " DRILL_RELEASE_VERSION
+    read -p "Build profile (e.g. hadoop-2, blank for default) : " BUILD_PROFILE
+
+    read -p "Drill Release Version (e.g. 1.4.0, 1.20.0-hadoop2) : " DRILL_RELEASE_VERSION
 
-    read -p "Drill Development Version (eg. 1.5.0-SNAPSHOT) : " DRILL_DEV_VERSION
+    read -p "Drill Development Version (e.g. 1.5.0-SNAPSHOT) : " DRILL_DEV_VERSION
 
     read -p "Release Commit SHA : " RELEASE_COMMIT_SHA
 
@@ -83,8 +90,7 @@ function readInputAndSetup(){
 
     read -p "Staging (personal) repo : " MY_REPO
 
-    read -p "Local release staging directory : " LOCAL_RELEASE_STAGING_DIR
-    createDirectoryIfAbsent "${LOCAL_RELEASE_STAGING_DIR}"
+    read -p "Svn working copy of dist.apache.org/repos/dist/dev/drill : " APACHE_DIST_WORKING_COPY
 
     read -p "Apache login : " APACHE_LOGIN
 
@@ -94,17 +100,20 @@ function readInputAndSetup(){
 
     DRILL_RELEASE_OUTFILE="${DRILL_RELEASE_OUTDIR}/drill_release.out.txt"
     DRILL_SRC=${WORK_DIR}/drill-release
+    [ -z "$BUILD_PROFILE" ] || BUILD_PROFILE="-P$BUILD_PROFILE"
 
     echo ""
     echo "-----------------"
+    echo "JAVA_HOME : " ${JAVA_HOME}
     echo "Drill Working Directory : " ${WORK_DIR}
     echo "Drill Src Directory : " ${DRILL_SRC}
+    echo "Build profile mvn arg: " ${BUILD_PROFILE}
     echo "Drill Release Version : " ${DRILL_RELEASE_VERSION}
     echo "Drill Development Version : " ${DRILL_DEV_VERSION}
     echo "Release Commit SHA : " ${RELEASE_COMMIT_SHA}
     echo "Write output to : " ${DRILL_RELEASE_OUTFILE}
     echo "Staging (personal) repo : " ${MY_REPO}
-    echo "Local release staging dir : " ${LOCAL_RELEASE_STAGING_DIR}
+    echo "Svn working copy of dist.apache.org/repos/dist/dev/drill : " ${APACHE_DIST_WORKING_COPY}
 
     touch ${DRILL_RELEASE_OUTFILE}
 }
@@ -142,43 +151,45 @@ runCmd "Cloning the repo" cloneRepo
 runCmd "Checking the build" mvn install -DskipTests
 
 export MAVEN_OPTS=-Xmx2g
-runCmd "Clearing release history" mvn release:clean -Papache-release -DpushChanges=false -DskipTests
+runCmd "Clearing release history" mvn release:clean \
+  -Papache-release \
+  -DpushChanges=false \
+  -DskipTests
 
 export MAVEN_OPTS='-Xmx4g -XX:MaxPermSize=512m'
-runCmd "Preparing the release " mvn -X release:prepare -Papache-release -DpushChanges=false -DskipTests -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE}  -DskipTests=true -Dmaven.javadoc.skip=false" -DreleaseVersion=${DRILL_RELEASE_VERSION} -DdevelopmentVersion=${DRILL_DEV_VERSION} -Dtag=drill-${DRILL_RELEASE_VERSION}
+runCmd "Preparing the release " mvn -X release:prepare \
+  -Papache-release \
+  -DpushChanges=false \
+  -DdevelopmentVersion=${DRILL_DEV_VERSION} \
+  -DreleaseVersion=${DRILL_RELEASE_VERSION} \
+  -Dtag=drill-${DRILL_RELEASE_VERSION} \
+  -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests -Dmaven.javadoc.skip=false ${BUILD_PROFILE}"
 
 runCmd "Pushing to private repo ${MY_REPO}" git push ${MY_REPO} drill-${DRILL_RELEASE_VERSION}
 
-runCmd "Performing the release to ${MY_REPO}" mvn release:perform -DconnectionUrl=scm:git:${MY_REPO} -DskipTests -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests=true -DconnectionUrl=scm:git:${MY_REPO}"
+runCmd "Performing the release to ${MY_REPO}" mvn release:perform \
+  -DconnectionUrl=scm:git:${MY_REPO} \
+  -DlocalCheckout=true \
+  -Darguments="-Dgpg.passphrase=${GPG_PASSPHRASE} -DskipTests ${BUILD_PROFILE}"
 
 runCmd "Checking out release commit" git checkout drill-${DRILL_RELEASE_VERSION}
 
 # Remove surrounding quotes
 tempGPG_PASSPHRASE="${GPG_PASSPHRASE%\"}"
 tempGPG_PASSPHRASE="${tempGPG_PASSPHRASE#\"}"
-runCmd "Deploying ..." mvn deploy -Papache-release -DskipTests -Dgpg.passphrase="${tempGPG_PASSPHRASE}"
-
-runCmd "Copying" copyFiles
+runCmd "Deploying ..." mvn deploy \
+  -Papache-release \
+  -DskipTests \
+  -Dgpg.passphrase="${tempGPG_PASSPHRASE}"
 
 runCmd "Verifying artifacts are signed correctly" ${CHKSMDIR}/checksum.sh ${DRILL_SRC}/distribution/target/apache-drill-${DRILL_RELEASE_VERSION}.tar.gz
 pause
 
-runCmd "Copy release files to home.apache.org" sftp ${APACHE_LOGIN}@home.apache.org <<EOF
-  mkdir public_html
-  cd public_html
-  mkdir drill
-  cd drill
-  mkdir releases
-  cd releases
-  mkdir ${DRILL_RELEASE_VERSION}
-  cd ${DRILL_RELEASE_VERSION}
-  mkdir rc${RELEASE_ATTEMPT}
-  cd rc${RELEASE_ATTEMPT}
-  put ${LOCAL_RELEASE_STAGING_DIR}/${DRILL_RELEASE_VERSION}/* .
-  bye
-EOF
+runCmd "Copying release files to your working copy of dist.apache.org/repos/dist/dev/drill" copyFiles
 
 echo "Go to the Apache maven staging repo and close the new jar release"
+echo "and go to ${APACHE_DIST_WORKING_COPY} and svn add/commit the new"
+echo "release candidate after checking the pending changes there."
 pause
 
 echo "Start the vote \(good luck\)\n"