Posted to commits@dolphinscheduler.apache.org by GitBox <gi...@apache.org> on 2022/01/07 03:22:24 UTC

[GitHub] [dolphinscheduler] caiiansheng opened a new issue #7868: [Bug] [DataSource] Add hive datasource failed

caiiansheng opened a new issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868


   ### Search before asking
   
   - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### What happened
   
   When adding a hive datasource in version 2.0.2: if the hive server version is 2.x.x, it can be added successfully, but connecting to a 3.x.x hive server fails. The hive cluster does not have kerberos enabled. The specific errors are as follows:
   ![image](https://user-images.githubusercontent.com/26760483/148482116-2cc24b1f-833f-4009-b2f7-f6fa8ecfcd14.png)
   Corresponding logs:
   ![image](https://user-images.githubusercontent.com/26760483/148482860-ea89935b-78c7-441e-9a37-f8c9dadbba7b.png)
   ![image](https://user-images.githubusercontent.com/26760483/148482915-3b1216dc-fec6-4ad8-95dc-5bb9505e998e.png)
   If the {"auth":"noSasl"} parameter is added, another error is reported:
   ![image](https://user-images.githubusercontent.com/26760483/148485734-f98e89e9-cb04-4223-b6a7-fa1f2e6f345e.png)
   
   In version 2.0.0, connecting to hive3 with {"auth":"noSasl"} added succeeded.
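   
   For reference, here is a minimal sketch of what the {"auth":"noSasl"} parameter amounts to at the JDBC level. The host, port, and database are placeholders rather than values from this issue, and hive-jdbc with its dependencies is assumed to be on the classpath; for hive2 URLs, extra connection parameters are appended as semicolon-separated session variables:
   
   ```
   import java.sql.Connection;
   import java.sql.DriverManager;
   import java.sql.ResultSet;
   import java.sql.Statement;
   
   public class HiveNoSaslProbe {
       public static void main(String[] args) throws Exception {
           // auth=noSasl is a hive2 session variable appended after the
           // database name; it makes the client skip the SASL handshake.
           String url = "jdbc:hive2://hive-host:10000/default;auth=noSasl";
           try (Connection conn = DriverManager.getConnection(url, "root", "");
                Statement stmt = conn.createStatement();
                // "select 1" is the same validation query shown in the logs below
                ResultSet rs = stmt.executeQuery("select 1")) {
               while (rs.next()) {
                   System.out.println("connected, select 1 -> " + rs.getInt(1));
               }
           }
       }
   }
   ```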
   
   
   @SbloodyS Last time you said this problem had been solved; could you help take a look at what is going on? Thanks.
   
   ### What you expected to happen
   
   Adding a new hive datasource succeeds.
   
   ### How to reproduce
   
   Add a new hive datasource for a hive server 3.x.x cluster and test the connection; it fails.
   
   ### Anything else
   
   The bug occurs in the 2.0.2 release; there was no problem with the 2.0 release.
   
   ### Version
   
   dev
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [dolphinscheduler] SbloodyS edited a comment on issue #7868: [Bug] [DataSource] Add hive datasource failed

Posted by GitBox <gi...@apache.org>.
SbloodyS edited a comment on issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868#issuecomment-1007199433


   I recompiled 2.0.2-release from the official website, and I can successfully connect to hive 3.1.2 with LDAP authentication.
   
   Could you please check whether there is a hive*.jar in ${dolphinscheduler_home}/lib? @caiiansheng
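   
   To make that check concrete, here is a small hypothetical helper (the DOLPHINSCHEDULER_HOME environment variable name and the hive* file-name prefix are assumptions) that lists hive*.jar files under ${dolphinscheduler_home}/lib:
   
   ```
   import java.io.File;
   
   public class ListHiveJars {
       public static void main(String[] args) {
           // Fall back to the current directory if the env var is not set.
           File lib = new File(System.getenv().getOrDefault("DOLPHINSCHEDULER_HOME", "."), "lib");
           File[] jars = lib.listFiles((dir, name) -> name.startsWith("hive") && name.endsWith(".jar"));
           if (jars == null || jars.length == 0) {
               System.out.println("No hive*.jar found in " + lib.getAbsolutePath());
           } else {
               for (File jar : jars) {
                   System.out.println(jar.getName());
               }
           }
       }
   }
   ```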


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [dolphinscheduler] SbloodyS commented on issue #7868: [Bug] [DataSource] Add hive datasource failed

Posted by GitBox <gi...@apache.org>.
SbloodyS commented on issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868#issuecomment-1007112340


   It seems like a hive class-not-found error. I'll take a look at 2.0.2-release.
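   
   A quick way to confirm that hypothesis (a sketch; org.apache.hive.jdbc.HiveDriver is the driver class named in the connection logs in this thread) is to probe for the class on the API server's classpath:
   
   ```
   public class DriverClassProbe {
       public static void main(String[] args) {
           try {
               Class.forName("org.apache.hive.jdbc.HiveDriver");
               System.out.println("Hive JDBC driver is on the classpath.");
           } catch (ClassNotFoundException e) {
               System.out.println("Hive JDBC driver missing - check ${dolphinscheduler_home}/lib for hive*.jar.");
           }
       }
   }
   ```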


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [dolphinscheduler] SbloodyS edited a comment on issue #7868: [Bug] [DataSource] Add hive datasource failed

Posted by GitBox <gi...@apache.org>.
SbloodyS edited a comment on issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868#issuecomment-1007219153


   ```
   <?xml version="1.0"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   
   <!-- Licensed to the Apache Software Foundation (ASF) under one or more       -->
   <!-- contributor license agreements.  See the NOTICE file distributed with    -->
   <!-- this work for additional information regarding copyright ownership.      -->
   <!-- The ASF licenses this file to You under the Apache License, Version 2.0  -->
   <!-- (the "License"); you may not use this file except in compliance with     -->
   <!-- the License.  You may obtain a copy of the License at                    -->
   <!--                                                                          -->
   <!--     http://www.apache.org/licenses/LICENSE-2.0                           -->
   <!--                                                                          -->
   <!-- Unless required by applicable law or agreed to in writing, software      -->
   <!-- distributed under the License is distributed on an "AS IS" BASIS,        -->
   <!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->
   <!-- See the License for the specific language governing permissions and      -->
   <!-- limitations under the License.                                           -->
   
   <configuration>
   
   <!-- Hive Configuration can either be stored in this file or in the hadoop configuration files  -->
   <!-- that are implied by Hadoop setup variables.                                                -->
   <!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive    -->
   <!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
   <!-- resource).                                                                                 -->
   
   <!-- Hive Execution Parameters -->
   
   <property>
     <name>hbase.master</name>
     <value></value>
     <description>http://wiki.apache.org/hadoop/Hive/HBaseIntegration</description>
   </property>
   
   <property>
     <name>hive.zookeeper.quorum</name>
     <value>ip-192-168-24-255.us-west-2.compute.internal:2181</value>
   </property>
   
   <property>
     <name>hive.llap.zk.sm.connectionString</name>
     <value>ip-192-168-24-255.us-west-2.compute.internal:2181</value>
   </property>
   
   <property>
     <name>hbase.zookeeper.quorum</name>
     <value>ip-192-168-24-255.us-west-2.compute.internal</value>
     <description>http://wiki.apache.org/hadoop/Hive/HBaseIntegration</description>
   </property>
   
   <property>
     <name>hive.execution.engine</name>
     <value>mr</value>
   </property>
   
     <property>
       <name>fs.defaultFS</name>
       <value>hdfs://ip-192-168-24-255.us-west-2.compute.internal:8020</value>
     </property>
   
   
     <property>
       <name>hive.metastore.uris</name>
       <value>thrift://ip-192-168-24-255.us-west-2.compute.internal:9083</value>
       <description>Thrift URI for the remote metastore.</description>
     </property>
   
     <property>
       <name>javax.jdo.option.ConnectionURL</name>
       <value>*</value>
       <description>JDBC connect string for a JDBC metastore</description>
     </property>
   
     <property>
       <name>javax.jdo.option.ConnectionDriverName</name>
       <value>org.mariadb.jdbc.Driver</value>
       <description>Driver class name for a JDBC metastore</description>
     </property>
   
     <property>
       <name>javax.jdo.option.ConnectionUserName</name>
       <value>*</value>
       <description>username to use against metastore database</description>
     </property>
   
     <property>
       <name>javax.jdo.option.ConnectionPassword</name>
       <value>*</value>
       <description>password to use against metastore database</description>
     </property>
   
   <property>
      <name>hive.server2.allow.user.substitution</name>
      <value>true</value>
   </property>
   
   <property>
      <name>hive.server2.enable.doAs</name>
      <value>true</value>
   </property>
   
   <property>
      <name>hive.server2.thrift.port</name>
      <value>10000</value>
   </property>
   
   <property>
      <name>hive.server2.thrift.http.port</name>
      <value>10001</value>
   </property>
   
   
   
   <property>
     <name>hive.optimize.ppd.input.formats</name>
     <value>com.amazonaws.emr.s3select.hive.S3SelectableTextInputFormat</value>
   </property>
   
   <property>
     <name>s3select.filter</name>
     <value>false</value>
   </property>
   
   <property>
       <name>hive.server2.in.place.progress</name>
       <value>true</value>
   </property>
   
   <property>
       <name>hive.llap.zk.registry.user</name>
       <value>hadoop</value>
   </property>
   
   <property>
       <name>hive.security.metastore.authorization.manager</name>
       <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
   </property>
   
     <property>
       <name>datanucleus.fixedDatastore</name>
       <value>true</value>
     </property>
   
     <property>
       <name>mapred.reduce.tasks</name>
       <value>-1</value>
     </property>
   
     <property>
       <name>mapred.max.split.size</name>
       <value>256000000</value>
     </property>
   
     <property>
       <name>hive.mapjoin.hybridgrace.hashtable</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.merge.nway.joins</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.metastore.connect.retries</name>
       <value>15</value>
     </property>
   
     <property>
       <name>hive.optimize.sort.dynamic.partition</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.tez.auto.reducer.parallelism</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.vectorized.execution.mapjoin.minmax.enabled</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.optimize.dynamic.partition.hashjoin</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.compactor.initiator.on</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.llap.daemon.service.hosts</name>
       <value>@llap0</value>
     </property>
   
     <property>
       <name>hive.llap.execution.mode</name>
       <value>only</value>
     </property>
   
     <property>
       <name>hive.optimize.metadataonly</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.tez.bucket.pruning</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.exec.mode.local.auto</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.exec.mode.local.auto.inputbytes.max</name>
       <value>50000000</value>
     </property>
   
     <property>
       <name>hive.query.reexecution.stats.persist.scope</name>
       <value>hiveserver</value>
     </property>
   
     <property>
       <name>hive.server2.authentication.ldap.baseDN</name>
       <value>*</value>
     </property>
   
     <property>
       <name>mapreduce.map.speculative</name>
       <value>false</value>
     </property>
   
     <property>
       <name>mapred.map.tasks.speculative.execution</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.mapred.mode</name>
       <value>nonstrict</value>
     </property>
   
     <property>
       <name>hive.merge.sparkfiles</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.merge.size.per.task</name>
       <value>268435456</value>
     </property>
   
     <property>
       <name>hive.blobstore.optimizations.enabled</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.merge.smallfiles.avgsize</name>
       <value>268435456</value>
     </property>
   
     <property>
       <name>hive.server2.authentication</name>
       <value>LDAP</value>
     </property>
   
     <property>
       <name>spark.yarn.jars</name>
       <value>hdfs:///spark-jars/*</value>
     </property>
   
     <property>
       <name>hive.merge.mapredfiles</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.server2.builtin.udf.whitelist</name>
       <value> </value>
     </property>
   
     <property>
       <name>spark.eventLog.dir</name>
       <value>hdfs:///var/log/spark/apps</value>
     </property>
   
     <property>
       <name>spark.serializer</name>
       <value>org.apache.spark.serializer.KryoSerializer</value>
     </property>
   
     <property>
       <name>hive.security.authorization.sqlstd.confwhitelist.append</name>
       <value>hive.input.format|spark.*|mapred.*|tez.*|hive.*</value>
     </property>
   
     <property>
       <name>hive.metastore.warehouse.dir</name>
       <value>s3://bi-bigdata/usr/hive/warehouse</value>
     </property>
   
     <property>
       <name>hive.server2.authentication.ldap.url</name>
       <value>ldap://192.168.25.200:10389</value>
     </property>
   
     <property>
       <name>spark.executor.memory</name>
       <value>11200M</value>
     </property>
   
     <property>
       <name>spark.driver.memory</name>
       <value>2048M</value>
     </property>
   
     <property>
       <name>spark.driver.cores</name>
       <value>2</value>
     </property>
   
     <property>
       <name>spark.master</name>
       <value>yarn</value>
     </property>
   
     <property>
       <name>spark.yarn.executor.memoryOverhead</name>
       <value>2700M</value>
     </property>
   
     <property>
       <name>spark.executor.instances</name>
       <value>30</value>
     </property>
   
     <property>
       <name>mapred.reduce.tasks.speculative.execution</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.strict.checks.cartesian.product</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.auto.convert.join</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.server2.session.check.interval</name>
       <value>1800000</value>
     </property>
   
     <property>
       <name>spark.executor.cores</name>
       <value>4</value>
     </property>
   
     <property>
       <name>hive.server2.idle.operation.timeout</name>
       <value>7200000</value>
     </property>
   
     <property>
       <name>hive.merge.mapfiles</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.mapred.reduce.tasks.speculative.execution</name>
       <value>false</value>
     </property>
   
     <property>
       <name>spark.eventLog.enabled</name>
       <value>true</value>
     </property>
   
     <property>
       <name>spark.shuffle.service.enabled</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.server2.builtin.udf.blacklist</name>
       <value>empty_blacklist</value>
     </property>
   
     <property>
       <name>spark.executor.extraJavaOptions</name>
       <value>-Dlog4j.ignoreTCL=true -Dfile.encoding=utf-8</value>
     </property>
   
     <property>
       <name>mapreduce.reduce.speculative</name>
       <value>false</value>
     </property>
   
     <property>
       <name>hive.server2.idle.session.timeout</name>
       <value>1800000</value>
     </property>
   
     <property>
       <name>spark.dynamicAllocation.enabled</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.auto.convert.join.noconditionaltask.size</name>
       <value>1251999744</value>
     </property>
   
     <property>
       <name>hive.stats.fetch.column.stats</name>
       <value>true</value>
     </property>
   
     <property>
       <name>hive.compactor.worker.threads</name>
       <value>4</value>
     </property>
   
   </configuration>
   ```
   @caiiansheng 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [dolphinscheduler] caiiansheng closed issue #7868: [Bug] [DataSource] Add hive datasource failed

Posted by GitBox <gi...@apache.org>.
caiiansheng closed issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [dolphinscheduler] github-actions[bot] commented on issue #7868: [Bug] [DataSource] Add hive datasource failed

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868#issuecomment-1007110697


   ### Search before asking
   
   - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### What happened
   
   When adding a hive data source in version 2.0.2: if the hive server version is 2.x.x, it can be added successfully, but connecting to a 3.x.x hive server fails. The hive cluster does not have kerberos enabled. The specific errors are as follows:
   ![image](https://user-images.githubusercontent.com/26760483/148482116-2cc24b1f-833f-4009-b2f7-f6fa8ecfcd14.png)
   Corresponding log:
   ![image](https://user-images.githubusercontent.com/26760483/148482860-ea89935b-78c7-441e-9a37-f8c9dadbba7b.png)
   ![image](https://user-images.githubusercontent.com/26760483/148482915-3b1216dc-fec6-4ad8-95dc-5bb9505e998e.png)
    If you add the {"auth":"noSasl"} parameter, another error is reported:
   ![image](https://user-images.githubusercontent.com/26760483/148485734-f98e89e9-cb04-4223-b6a7-fa1f2e6f345e.png)
   
   In version 2.0.0, connecting to hive3 with {"auth":"noSasl"} added succeeded.
   
   
   @SbloodyS Last time you said this problem had been solved; could you help take a look at what is going on? Thank you.
   
   ### What you expected to happen
   
   Adding a new hive datasource succeeds.
   
   ### How to reproduce
   
   Add a new hive datasource for a hive server 3.x.x cluster and test the connection; it fails.
   
   ### Anything else
   
   The bug occurs in the 2.0.2 release; there was no problem with the 2.0 release.
   
   ### Version
   
   dev
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [dolphinscheduler] caiiansheng commented on issue #7868: [Bug] [DataSource] Add hive datasource failed

Posted by GitBox <gi...@apache.org>.
caiiansheng commented on issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868#issuecomment-1007210326


   @SbloodyS 
   ![image](https://user-images.githubusercontent.com/26760483/148510937-e96be7ee-166f-4555-8b95-f5ed76e3ee9b.png)
   Can you show me your hive configuration parameters?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [dolphinscheduler] caiiansheng commented on issue #7868: [Bug] [DataSource] Add hive datasource failed

Posted by GitBox <gi...@apache.org>.
caiiansheng commented on issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868#issuecomment-1007235123


   Did you set JDBC parameters when you connected the hive data source on the web page?
   ![image](https://user-images.githubusercontent.com/26760483/148515689-9313c300-66a6-47d0-8c88-9aa03b774a48.png)
   
   I don't know why my connection failed!
   logs:
   [WARN] 2022-01-07 16:40:41.989 org.apache.hive.jdbc.HiveConnection:[200] - Failed to connect to 7.185.65.247:10000
   [ERROR] 2022-01-07 16:40:43.004 com.zaxxer.hikari.pool.HikariPool:[594] - HikariPool-1 - Exception during pool initialization.
   java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://7.185.65.247:10000/default: null
   	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:219)
   	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:157)
   	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
   	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
   	at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364)
   	at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
   	at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476)
   	at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561)
   	at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
   	at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112)
   	at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:159)
   	at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:117)
   	at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80)
   	at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:376)
   	at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:431)
   	at org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient.checkClient(CommonDataSourceClient.java:104)
   	at org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient.<init>(CommonDataSourceClient.java:55)
   	at org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceClient.<init>(HiveDataSourceClient.java:62)
   	at org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceChannel.createDataSourceClient(HiveDataSourceChannel.java:29)
   	at org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.lambda$getConnection$0(DataSourceClientProvider.java:64)
   	at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
   	at org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.getConnection(DataSourceClientProvider.java:58)
   	at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl.checkConnection(DataSourceServiceImpl.java:320)
   	at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl$$FastClassBySpringCGLIB$$a86d54aa.invoke(<generated>)
   	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
   	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
   	at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl$$EnhancerBySpringCGLIB$$440f402e.checkConnection(<generated>)
   	at org.apache.dolphinscheduler.api.controller.DataSourceController.connectDataSource(DataSourceController.java:215)
   	at org.apache.dolphinscheduler.api.controller.DataSourceController$$FastClassBySpringCGLIB$$835fdd04.invoke(<generated>)
   	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
   	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783)
   	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
   	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
   	at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89)
   	at org.apache.dolphinscheduler.api.aspect.AccessLogAspect.doAround(AccessLogAspect.java:92)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:634)
   	at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:624)
   	at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:72)
   	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
   	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
   	at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
   	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
   	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
   	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698)
   	at org.apache.dolphinscheduler.api.controller.DataSourceController$$EnhancerBySpringCGLIB$$5a6b7b6d.connectDataSource(<generated>)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205)
   	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150)
   	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117)
   	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
   	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808)
   	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
   	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067)
   	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963)
   	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006)
   	at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909)
   	at javax.servlet.http.HttpServlet.service(HttpServlet.java:517)
   	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883)
   	at javax.servlet.http.HttpServlet.service(HttpServlet.java:584)
   	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
   	at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1631)
   	at com.github.xiaoymin.swaggerbootstrapui.filter.SecurityBasicAuthFilter.doFilter(SecurityBasicAuthFilter.java:84)
   	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
   	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
   	at com.github.xiaoymin.swaggerbootstrapui.filter.ProductionSecurityFilter.doFilter(ProductionSecurityFilter.java:53)
   	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
   	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
   	at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91)
   	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
   	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
   	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
   	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100)
   	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
   	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
   	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
   	at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93)
   	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
   	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
   	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
   	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201)
   	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119)
   	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
   	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
   	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
   	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
   	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600)
   	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
   	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
   	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
   	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
   	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
   	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
   	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
   	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
   	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
   	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
   	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
   	at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:763)
   	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
   	at org.eclipse.jetty.server.Server.handle(Server.java:516)
   	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400)
   	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645)
   	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392)
   	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
   	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
   	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
   	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
   	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
   	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
   	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
   	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
   	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
   	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
   	at java.lang.Thread.run(Thread.java:748)
   Caused by: org.apache.thrift.transport.TTransportException: null
   	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
   	at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178)
   	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307)
   	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
   	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:195)
   	... 120 common frames omitted
   [INFO] 2022-01-07 16:40:43.005 org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient:[108] - Time to execute check jdbc client with sql select 1 for 1122 ms 
   [ERROR] 2022-01-07 16:40:43.005 org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl:[328] - datasource test connection error, dbType:HIVE, connectionParam:HiveConnectionParam{user='root', password='', address='jdbc:hive2://7.185.65.247:10000', database='default', jdbcUrl='jdbc:hive2://7.185.65.247:10000/default', driverLocation='null', driverClassName='org.apache.hive.jdbc.HiveDriver', validationQuery='select 1', other='null', principal='null', javaSecurityKrb5Conf='null', loginUserKeytabUsername='null', loginUserKeytabPath='null'}, message:JDBC connect failed.
   @SbloodyS 
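   
   One possible reading of the trace (an interpretation, not a confirmed diagnosis): TSaslTransport.open failing with TTransportException: null means the server closed the connection mid-handshake, which commonly indicates an authentication mismatch between the client and HiveServer2 - for example, a server requiring LDAP/PLAIN credentials receiving an empty password, or a NOSASL server receiving a SASL handshake. A minimal sketch of the credentialed variant, with placeholder user name and password:
   
   ```
   import java.sql.Connection;
   import java.sql.DriverManager;
   
   public class HiveLdapProbe {
       public static void main(String[] args) throws Exception {
           // With LDAP/PLAIN authentication, HiveServer2 validates the user
           // name and password during the SASL exchange, so an empty password
           // fails before any SQL is sent.
           String url = "jdbc:hive2://7.185.65.247:10000/default";
           try (Connection conn = DriverManager.getConnection(url, "ldapUser", "ldapPassword")) {
               System.out.println("connected: " + !conn.isClosed());
           }
       }
   }
   ```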
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [dolphinscheduler] github-actions[bot] commented on issue #7868: [Bug] [DataSource] Add hive datasource failed

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on issue #7868:
URL: https://github.com/apache/dolphinscheduler/issues/7868#issuecomment-1007110783


   Hi:
   * Thank you for your feedback; we have received your issue. Please wait patiently for a reply.
   * So that we can understand your request as soon as possible, please provide detailed information, versions, or pictures.
   * If you haven't received a reply for a long time, you can subscribe to the developer mailing list (subscription steps: https://dolphinscheduler.apache.org/en-us/community/development/subscribe.html), then include the issue URL in the email body and send your question to dev@dolphinscheduler.apache.org.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@dolphinscheduler.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


