Posted to commits@impala.apache.org by ta...@apache.org on 2020/09/01 01:17:29 UTC

[impala] branch master updated (827070b -> f85dbff)

This is an automated email from the ASF dual-hosted git repository.

tarmstrong pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git.


    from 827070b  IMPALA-10099: Push down DISTINCT in Set operations
     new 1cdae46  IMPALA-10118: Update shaded-deps/hive-exec/pom.xml for GenericHiveLexer
     new f85dbff  IMPALA-10030: Remove unnecessary jar dependencies

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 fe/pom.xml                                         | 98 +++++-----------------
 .../org/apache/impala/service/JniFrontend.java     | 21 ++---
 .../apache/impala/util/FsPermissionChecker.java    |  8 +-
 .../org/apache/impala/util/HdfsCachingUtil.java    |  7 +-
 .../org/apache/impala/service/JniFrontendTest.java | 19 +++--
 shaded-deps/hive-exec/pom.xml                      |  1 +
 6 files changed, 52 insertions(+), 102 deletions(-)


[impala] 02/02: IMPALA-10030: Remove unnecessary jar dependencies

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tarmstrong pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit f85dbff97618066d60f37736808c8c24aa0a98e5
Author: Sahil Takiar <ta...@gmail.com>
AuthorDate: Wed Jul 29 17:54:11 2020 -0700

    IMPALA-10030: Remove unnecessary jar dependencies
    
    Remove the dependency on hadoop-hdfs; this jar contains the core code
    for implementing HDFS and thus pulls in a number of unnecessary
    transitive dependencies. Impala currently only requires this jar for
    some configuration key names. Most of these keys have been moved to
    the appropriate HDFS client jars, and some others are deprecated
    altogether. Removing this jar required a few code changes to update
    the locations of the referenced configuration keys.
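    
    A minimal sketch of the key relocation this implies, assuming only
    hadoop-common and hadoop-hdfs-client on the classpath (the wrapper
    method is hypothetical; the key class and constants appear in the
    JniFrontend diff below):
    
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
    
        public class ShortCircuitKeySketch {
          // Resolve the short-circuit read flag via the client-side key
          // class rather than DFSConfigKeys from the removed hadoop-hdfs jar.
          static boolean shortCircuitEnabled(Configuration conf) {
            return conf.getBoolean(HdfsClientConfigKeys.Read.ShortCircuit.KEY,
                HdfsClientConfigKeys.Read.ShortCircuit.DEFAULT);
          }
        }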
    
    Removes all transitive Kafka dependencies from the Apache Ranger
    dependency. Previously, Impala only excluded Kafka jars with the
    binary version kafka_2.11; however, Ranger appears to have recently
    upgraded the dependency to kafka_2.12. Now all Kafka dependencies are
    excluded regardless of artifact name, via the wildcard exclusion shown
    in the fe/pom.xml diff below.
    
    Removes all transitive dependencies from the Apache Ozone dependency.
    Impala depends on the Ozone client shaded jar, which already includes
    all required transitive dependencies. For some reason, Ozone still
    pulls in some transitive dependencies even though they are not
    needed.
    
    Made some other minor cleanups and improvements in the fe/pom.xml
    file.
    
    This saves about 70 MB of space in the Docker images.
    
    Testing:
    * Ran exhaustive tests
    * Ran on-prem cluster E2E tests
    
    Change-Id: Iadbb6142466f73f067dd7cf9d401ff81145c74cc
    Reviewed-on: http://gerrit.cloudera.org:8080/16311
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 fe/pom.xml                                         | 98 +++++-----------------
 .../org/apache/impala/service/JniFrontend.java     | 21 ++---
 .../apache/impala/util/FsPermissionChecker.java    |  8 +-
 .../org/apache/impala/util/HdfsCachingUtil.java    |  7 +-
 .../org/apache/impala/service/JniFrontendTest.java | 19 +++--
 5 files changed, 51 insertions(+), 102 deletions(-)

diff --git a/fe/pom.xml b/fe/pom.xml
index 061a739..6ea5c8a 100644
--- a/fe/pom.xml
+++ b/fe/pom.xml
@@ -35,14 +35,6 @@ under the License.
   <name>Apache Impala Query Engine Frontend</name>
 
   <dependencies>
-    <!-- Force json-smart dependency.
-         See https://issues.apache.org/jira/browse/HADOOP-14903 -->
-    <dependency>
-      <groupId>net.minidev</groupId>
-      <artifactId>json-smart</artifactId>
-      <version>2.3</version>
-    </dependency>
-
     <dependency>
       <groupId>org.apache.impala</groupId>
       <artifactId>query-event-hook-api</artifactId>
@@ -54,46 +46,18 @@ under the License.
       <artifactId>impala-data-source-api</artifactId>
       <version>${impala.extdatasrc.api.version}</version>
     </dependency>
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-hdfs</artifactId>
-      <version>${hadoop.version}</version>
-      <exclusions>
-        <exclusion>
-          <groupId>org.eclipse.jetty</groupId>
-          <artifactId>*</artifactId>
-        </exclusion>
-        <exclusion>
-          <!-- IMPALA-9108: Avoid pulling in leveldbjni, which is unneeded. -->
-          <groupId>org.fusesource.leveldbjni</groupId>
-          <artifactId>*</artifactId>
-        </exclusion>
-        <exclusion>
-          <!-- IMPALA-9468: Avoid pulling in netty for security reasons -->
-          <groupId>io.netty</groupId>
-          <artifactId>*</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>com.sun.jersey</groupId>
-          <artifactId>jersey-server</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
+
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-hdfs-client</artifactId>
       <version>${hadoop.version}</version>
     </dependency>
+
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-common</artifactId>
       <version>${hadoop.version}</version>
       <exclusions>
-        <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
         <exclusion>
           <groupId>org.eclipse.jetty</groupId>
           <artifactId>*</artifactId>
@@ -113,17 +77,11 @@ under the License.
         </exclusion>
       </exclusions>
     </dependency>
+
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-auth</artifactId>
       <version>${hadoop.version}</version>
-      <exclusions>
-        <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
-      </exclusions>
     </dependency>
 
     <dependency>
@@ -162,13 +120,6 @@ under the License.
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-azure-datalake</artifactId>
       <version>${hadoop.version}</version>
-      <exclusions>
-        <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
-      </exclusions>
     </dependency>
 
     <dependency>
@@ -218,7 +169,7 @@ under the License.
       <exclusions>
         <exclusion>
           <groupId>org.apache.kafka</groupId>
-          <artifactId>kafka_2.11</artifactId>
+          <artifactId>*</artifactId>
         </exclusion>
         <exclusion>
           <groupId>org.apache.shiro</groupId>
@@ -230,12 +181,14 @@ under the License.
         </exclusion>
       </exclusions>
     </dependency>
-    <!-- this is needed by ranger-plugins-audit -->
+
+    <!-- This is needed by ranger-plugins-audit -->
     <dependency>
       <groupId>javax.mail</groupId>
       <artifactId>mail</artifactId>
       <version>1.4</version>
     </dependency>
+
     <dependency>
       <groupId>javax.ws.rs</groupId>
       <artifactId>javax.ws.rs-api</artifactId>
@@ -290,26 +243,12 @@ under the License.
       <groupId>org.apache.hbase</groupId>
       <artifactId>hbase-client</artifactId>
       <version>${hbase.version}</version>
-       <exclusions>
-         <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-         <exclusion>
-           <groupId>net.minidev</groupId>
-           <artifactId>json-smart</artifactId>
-         </exclusion>
-      </exclusions>
     </dependency>
 
     <dependency>
       <groupId>org.apache.hbase</groupId>
       <artifactId>hbase-common</artifactId>
       <version>${hbase.version}</version>
-       <exclusions>
-         <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-         <exclusion>
-           <groupId>net.minidev</groupId>
-           <artifactId>json-smart</artifactId>
-         </exclusion>
-      </exclusions>
     </dependency>
 
     <dependency>
@@ -382,6 +321,7 @@ under the License.
       <artifactId>slf4j-api</artifactId>
       <version>${slf4j.version}</version>
     </dependency>
+
     <dependency>
       <groupId>org.slf4j</groupId>
       <artifactId>slf4j-log4j12</artifactId>
@@ -401,9 +341,9 @@ under the License.
     </dependency>
 
     <dependency>
-        <groupId>com.google.errorprone</groupId>
-        <artifactId>error_prone_annotations</artifactId>
-        <version>2.3.1</version>
+      <groupId>com.google.errorprone</groupId>
+      <artifactId>error_prone_annotations</artifactId>
+      <version>2.3.1</version>
     </dependency>
 
     <dependency>
@@ -424,6 +364,7 @@ under the License.
       <artifactId>json-simple</artifactId>
       <version>1.1.1</version>
     </dependency>
+
     <dependency>
       <groupId>org.glassfish</groupId>
       <artifactId>javax.json</artifactId>
@@ -979,11 +920,6 @@ under the License.
               <groupId>org.apache.logging.log4j</groupId>
               <artifactId>log4j-1.2-api</artifactId>
             </exclusion>
-            <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-            <exclusion>
-              <groupId>net.minidev</groupId>
-              <artifactId>json-smart</artifactId>
-            </exclusion>
             <exclusion>
               <groupId>org.apache.hive</groupId>
               <artifactId>hive-serde</artifactId>
@@ -1035,6 +971,16 @@ under the License.
           <groupId>org.apache.hadoop</groupId>
           <artifactId>hadoop-ozone-filesystem-hadoop3</artifactId>
           <version>${ozone.version}</version>
+          <!-- Remove all transitive dependencies from the Apache Ozone dependency.
+          hadoop-ozone-filesystem-hadoop3 is a shaded-jar, which already includes
+          all required transitive dependencies. For some reason, Ozone still pulls
+          in some transitive dependencies even though they are not needed. -->
+          <exclusions>
+            <exclusion>
+              <groupId>*</groupId>
+              <artifactId>*</artifactId>
+            </exclusion>
+          </exclusions>
         </dependency>
       </dependencies>
     </profile>
diff --git a/fe/src/main/java/org/apache/impala/service/JniFrontend.java b/fe/src/main/java/org/apache/impala/service/JniFrontend.java
index 3642429..324105d 100644
--- a/fe/src/main/java/org/apache/impala/service/JniFrontend.java
+++ b/fe/src/main/java/org/apache/impala/service/JniFrontend.java
@@ -30,7 +30,7 @@ import org.apache.hadoop.fs.adl.AdlFileSystem;
 import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
 import org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem;
 import org.apache.hadoop.fs.s3a.S3AFileSystem;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.security.Groups;
 import org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback;
@@ -753,8 +753,8 @@ public class JniFrontend {
    */
   @VisibleForTesting
   protected static String checkShortCircuitRead(Configuration conf) {
-    if (!conf.getBoolean(DFSConfigKeys.DFS_CLIENT_READ_SHORTCIRCUIT_KEY,
-        DFSConfigKeys.DFS_CLIENT_READ_SHORTCIRCUIT_DEFAULT)) {
+    if (!conf.getBoolean(HdfsClientConfigKeys.Read.ShortCircuit.KEY,
+        HdfsClientConfigKeys.Read.ShortCircuit.DEFAULT)) {
       LOG.info("Short-circuit reads are not enabled.");
       return "";
     }
@@ -765,11 +765,12 @@ public class JniFrontend {
     StringBuilder errorCause = new StringBuilder();
 
     // dfs.domain.socket.path must be set properly
-    String domainSocketPath = conf.getTrimmed(DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY,
-        DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_DEFAULT);
+    String domainSocketPath =
+        conf.getTrimmed(HdfsClientConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY,
+            HdfsClientConfigKeys.DFS_DOMAIN_SOCKET_PATH_DEFAULT);
     if (domainSocketPath.isEmpty()) {
       errorCause.append(prefix);
-      errorCause.append(DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY);
+      errorCause.append(HdfsClientConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY);
       errorCause.append(" is not configured.\n");
     } else {
       // The socket path parent directory must be readable and executable.
@@ -781,16 +782,16 @@ public class JniFrontend {
       } else if (socketDir == null || !socketDir.canRead() || !socketDir.canExecute()) {
         errorCause.append(prefix);
         errorCause.append("Impala cannot read or execute the parent directory of ");
-        errorCause.append(DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY);
+        errorCause.append(HdfsClientConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY);
         errorCause.append("\n");
       }
     }
 
     // dfs.client.use.legacy.blockreader.local must be set to false
-    if (conf.getBoolean(DFSConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL,
-        DFSConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL_DEFAULT)) {
+    if (conf.getBoolean(HdfsClientConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL,
+        HdfsClientConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL_DEFAULT)) {
       errorCause.append(prefix);
-      errorCause.append(DFSConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL);
+      errorCause.append(HdfsClientConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL);
       errorCause.append(" should not be enabled.\n");
     }
 
diff --git a/fe/src/main/java/org/apache/impala/util/FsPermissionChecker.java b/fe/src/main/java/org/apache/impala/util/FsPermissionChecker.java
index db5c555..4970043 100644
--- a/fe/src/main/java/org/apache/impala/util/FsPermissionChecker.java
+++ b/fe/src/main/java/org/apache/impala/util/FsPermissionChecker.java
@@ -39,8 +39,7 @@ import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.hdfs.protocol.AclException;
-import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT;
-import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY;
+import static org.apache.hadoop.hdfs.client.HdfsClientConfigKeys.DeprecatedKeys.DFS_PERMISSIONS_SUPERUSERGROUP_KEY;
 
 import com.google.common.base.Preconditions;
 import com.google.common.collect.ImmutableList;
@@ -72,8 +71,9 @@ public class FsPermissionChecker {
   private FsPermissionChecker() throws IOException {
     UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
     groups_.addAll(Arrays.asList(ugi.getGroupNames()));
-    supergroup_ = CONF.get(DFS_PERMISSIONS_SUPERUSERGROUP_KEY,
-        DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT);
+    // The default value is taken from the String DFS_PERMISSIONS_SUPERUSERGROUP_DEFAULT
+    // in DFSConfigKeys.java from the hadoop-hdfs jar.
+    supergroup_ = CONF.get(DFS_PERMISSIONS_SUPERUSERGROUP_KEY, "supergroup");
     user_ = ugi.getShortUserName();
   }
 
diff --git a/fe/src/main/java/org/apache/impala/util/HdfsCachingUtil.java b/fe/src/main/java/org/apache/impala/util/HdfsCachingUtil.java
index 1a0c1d9..b22c2f8 100644
--- a/fe/src/main/java/org/apache/impala/util/HdfsCachingUtil.java
+++ b/fe/src/main/java/org/apache/impala/util/HdfsCachingUtil.java
@@ -22,7 +22,6 @@ import java.util.Map;
 
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.RemoteIterator;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry;
 import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
@@ -265,9 +264,11 @@ public class HdfsCachingUtil {
 
     // The refresh interval is how often HDFS will update cache directive stats. We use
     // this value to determine how frequently we should poll for changes.
+    // The key dfs.namenode.path.based.cache.refresh.interval.ms is copied from the string
+    // DFS_NAMENODE_PATH_BASED_CACHE_REFRESH_INTERVAL_MS in DFSConfigKeys.java from the
+    // hadoop-hdfs jar.
     long hdfsRefreshIntervalMs = getDfs().getConf().getLong(
-        DFSConfigKeys.DFS_NAMENODE_PATH_BASED_CACHE_REFRESH_INTERVAL_MS,
-        DFSConfigKeys.DFS_NAMENODE_PATH_BASED_CACHE_REFRESH_INTERVAL_MS_DEFAULT);
+        "dfs.namenode.path.based.cache.refresh.interval.ms", 30000L);
     Preconditions.checkState(hdfsRefreshIntervalMs > 0);
 
     // Loop until either MAX_UNCHANGED_CACHING_REFRESH_INTERVALS have passed with no
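
A standalone sketch of the deprecated-key lookup in the hunk above, assuming
hadoop-common on the classpath (the wrapper method is hypothetical; the key
string and the 30000L fallback come from the diff):

    import org.apache.hadoop.conf.Configuration;

    public class CacheRefreshIntervalSketch {
      // Read the refresh interval by its literal key name; the 30000L
      // default mirrors DFS_NAMENODE_PATH_BASED_CACHE_REFRESH_INTERVAL_MS_DEFAULT,
      // formerly referenced from DFSConfigKeys in the hadoop-hdfs jar.
      static long refreshIntervalMs(Configuration conf) {
        return conf.getLong(
            "dfs.namenode.path.based.cache.refresh.interval.ms", 30000L);
      }
    }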
diff --git a/fe/src/test/java/org/apache/impala/service/JniFrontendTest.java b/fe/src/test/java/org/apache/impala/service/JniFrontendTest.java
index 26752d9..771b1c7 100644
--- a/fe/src/test/java/org/apache/impala/service/JniFrontendTest.java
+++ b/fe/src/test/java/org/apache/impala/service/JniFrontendTest.java
@@ -27,7 +27,7 @@ import java.util.Random;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
-import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
 import org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback;
 import org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMappingWithFallback;
 import org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
@@ -96,13 +96,14 @@ public class JniFrontendTest {
     socketDir.getParentFile().setExecutable(false);
 
     Configuration conf = mock(Configuration.class);
-    when(conf.getBoolean(DFSConfigKeys.DFS_CLIENT_READ_SHORTCIRCUIT_KEY,
-        DFSConfigKeys.DFS_CLIENT_READ_SHORTCIRCUIT_DEFAULT)).thenReturn(true);
-    when(conf.getTrimmed(DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY,
-        DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_DEFAULT))
+    when(conf.getBoolean(HdfsClientConfigKeys.Read.ShortCircuit.KEY,
+        HdfsClientConfigKeys.Read.ShortCircuit.DEFAULT)).thenReturn(true);
+    when(conf.getTrimmed(HdfsClientConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY,
+        HdfsClientConfigKeys.DFS_DOMAIN_SOCKET_PATH_DEFAULT))
         .thenReturn(socketDir.getAbsolutePath());
-    when(conf.getBoolean(DFSConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL,
-        DFSConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL_DEFAULT)).thenReturn(false);
+    when(conf.getBoolean(HdfsClientConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL,
+             HdfsClientConfigKeys.DFS_CLIENT_USE_LEGACY_BLOCKREADERLOCAL_DEFAULT))
+        .thenReturn(false);
     BackendConfig.INSTANCE = mock(BackendConfig.class);
 
     when(BackendConfig.INSTANCE.isDedicatedCoordinator()).thenReturn(true);
@@ -113,7 +114,7 @@ public class JniFrontendTest {
     actualErrorMessage = JniFrontend.checkShortCircuitRead(conf);
     assertEquals("Invalid short-circuit reads configuration:\n"
         + "  - Impala cannot read or execute the parent directory of "
-        + DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY + "\n",
+        + HdfsClientConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY + "\n",
         actualErrorMessage);
 
     if (socketDir != null) {
@@ -122,4 +123,4 @@ public class JniFrontendTest {
       socketDir.getParentFile().delete();
     }
   }
-}
\ No newline at end of file
+}


[impala] 01/02: IMPALA-10118: Update shaded-deps/hive-exec/pom.xml for GenericHiveLexer

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tarmstrong pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 1cdae465b7f756d5e9e99d7512501c14eab7d081
Author: Fang-Yu Rao <fa...@cloudera.com>
AuthorDate: Mon Aug 31 11:19:07 2020 -0700

    IMPALA-10118: Update shaded-deps/hive-exec/pom.xml for GenericHiveLexer
    
    HIVE-19064 introduced the GenericHiveLexer class as an intermediate
    class between HiveLexer and Lexer. So that ToSqlUtils.java still
    compiles once we bump CDP_BUILD_NUMBER to a build that includes this
    change on the Hive side, this patch updates
    shaded-deps/hive-exec/pom.xml to include the GenericHiveLexer classes
    in the shaded jar, allowing Impala to build successfully.
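    
    A rough sketch of the resulting class hierarchy, with hypothetical
    skeletons standing in for the ANTLR-generated classes (only the names
    and the inheritance relationship come from HIVE-19064):
    
        // The real classes carry token tables, rule methods, and lexer
        // state; these empty bodies only illustrate the hierarchy.
        abstract class Lexer {}                           // ANTLR runtime base
        abstract class GenericHiveLexer extends Lexer {}  // new intermediate class
        class HiveLexer extends GenericHiveLexer {}       // concrete HiveQL lexer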
    
    Testing:
     - Verified that Impala could compile in a local development
       environment after applying this patch.
    
    Change-Id: I27db1cb8de36dd86bae08b7177ae3f1c156d73bc
    Reviewed-on: http://gerrit.cloudera.org:8080/16390
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 shaded-deps/hive-exec/pom.xml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/shaded-deps/hive-exec/pom.xml b/shaded-deps/hive-exec/pom.xml
index eadc397..43be1a0 100644
--- a/shaded-deps/hive-exec/pom.xml
+++ b/shaded-deps/hive-exec/pom.xml
@@ -82,6 +82,7 @@ the same dependencies
                 <!-- Needed to support Hive udfs -->
                 <include>org/apache/hadoop/hive/ql/exec/*UDF*</include>
                 <include>org/apache/hadoop/hive/ql/exec/FunctionUtils*</include>
+                <include>org/apache/hadoop/hive/ql/parse/GenericHiveLexer*</include>
                 <include>org/apache/hadoop/hive/ql/parse/HiveLexer*</include>
                 <include>org/apache/hadoop/hive/ql/udf/**/*</include>
                 <!-- Many of the UDFs are annotated with their vectorized counter-parts.