Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/11/14 00:58:27 UTC

[GitHub] [hudi] pengzhiwei2018 commented on a change in pull request #2241: [HUDI-1384] Decoupling hive jdbc dependency when HIVE_USE_JDBC_OPT_KE…

pengzhiwei2018 commented on a change in pull request #2241:
URL: https://github.com/apache/hudi/pull/2241#discussion_r523305615



##########
File path: hudi-sync/hudi-hive-sync/src/main/java/org/apache/hudi/hive/HoodieHiveClient.java
##########
@@ -426,7 +415,7 @@ public CommandProcessorResponse updateHiveSQLUsingHiveDriver(String sql) {
   private void createHiveConnection() {
     if (connection == null) {
       try {
-        Class.forName(HiveDriver.class.getCanonicalName());
+        Class.forName("org.apache.hive.jdbc.HiveDriver");
       } catch (ClassNotFoundException e) {

Review comment:
       Thanks for your response, @wangxianghu. When we debug code with `HIVE_USE_JDBC_OPT_KEY=false` in the IDE, we only include the **hudi-spark** dependency; hive-jdbc is not needed. Including more dependencies may lead to more conflicts, and IMO it is better to follow the minimization principle: if we do not use Hive JDBC, we should not reference the dependency in the code, even though hive-jdbc is packaged in the `hudi-spark-bundle`.
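
The decoupling in the diff above can be sketched as follows. This is an illustrative standalone example, not Hudi code: `DriverLoadingSketch` and `tryLoadDriver` are hypothetical names. The point is that `HiveDriver.class.getCanonicalName()` requires hive-jdbc at compile time, while the string literal defers resolution to runtime, so the driver only needs to be on the classpath when the JDBC path is actually used.

```java
// Sketch (hypothetical class) of referencing a JDBC driver by name only.
// A hard reference like HiveDriver.class.getCanonicalName() forces hive-jdbc
// onto the compile-time classpath; the string literal compiles without it.
public class DriverLoadingSketch {

    // Attempt to load a driver class reflectively; report whether it is present.
    static boolean tryLoadDriver(String driverClassName) {
        try {
            Class.forName(driverClassName);
            return true;
        } catch (ClassNotFoundException e) {
            // With HIVE_USE_JDBC_OPT_KEY=false the JDBC path is never taken,
            // so a missing driver only matters when JDBC sync is actually used.
            return false;
        }
    }

    public static void main(String[] args) {
        // Fails gracefully when hive-jdbc is absent from the runtime classpath.
        System.out.println(tryLoadDriver("org.apache.hive.jdbc.HiveDriver"));
        // A JDK-provided class loads fine.
        System.out.println(tryLoadDriver("java.sql.DriverManager"));
    }
}
```

This mirrors the reviewed change: the string form keeps compilation and IDE debugging free of the hive-jdbc artifact while behaving identically when the driver is bundled (as in `hudi-spark-bundle`).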




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org