Posted to user-zh@flink.apache.org by 叶贤勋 <yx...@163.com> on 2020/03/03 08:00:17 UTC

Re: Hive source with Kerberos authentication issue

The hive conf should be correct: the earlier UserGroupInformation login succeeds with it.
If the datanucleus dependencies are left out, we get class-not-found and related exceptions:
1. java.lang.ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory
2. Caused by: org.datanucleus.exceptions.NucleusUserException: There is no available StoreManager of type "rdbms". Please make sure you have specified "datanucleus.storeManagerType" correctly and that all relevant plugins are in the CLASSPATH
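A quick, JDK-only way to confirm which of these classes the job's classloader can actually see is a probe like the sketch below. ClasspathProbe is a hypothetical helper name, not part of Flink, Hive, or DataNucleus:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: reports which of the given class names are NOT visible
// on the current classpath. Useful for confirming whether the DataNucleus
// classes from the exceptions above actually made it into the job jar.
public class ClasspathProbe {
    public static List<String> missing(String... classNames) {
        List<String> absent = new ArrayList<>();
        for (String name : classNames) {
            try {
                // initialize=false: we only care about visibility, not static init
                Class.forName(name, false, ClasspathProbe.class.getClassLoader());
            } catch (ClassNotFoundException e) {
                absent.add(name);
            }
        }
        return absent;
    }

    public static void main(String[] args) {
        // Class name taken from exception 1 above; add others as needed.
        System.out.println(missing("org.datanucleus.api.jdo.JDOPersistenceManagerFactory"));
    }
}
```

Running this from inside the submitted job, rather than locally, shows what the cluster-side classpath really contains.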



叶贤勋
yxx_cmhd@163.com


On 2020/03/02 11:50, Rui Li <li...@apache.org> wrote:
From the log you posted, it looks like an embedded metastore was created. Can you check whether HiveCatalog is reading an incorrect hive conf? Also, are all the maven dependencies you listed packaged into your Flink job jar? Dependencies like datanucleus should not be needed.

On Sat, Feb 29, 2020 at 10:42 PM 叶贤勋 <yx...@163.com> wrote:

Hi Rui Li, thanks for your reply.
The earlier problem was solved by setting yarn.resourcemanager.principal.
But now another problem has come up; please take a look.

Background: the Flink job sources from and sinks to a Kerberos-secured Hive. The same code passes Kerberos authentication when tested locally, and can query and insert data into Hive, but once the job is submitted to the cluster it fails with a Kerberos authentication error.
Flink: 1.9.1; flink-1.9.1/lib/ contains flink-dist_2.11-1.9.1.jar, flink-shaded-hadoop-2-uber-2.7.5-7.0.jar, log4j-1.2.17.jar, slf4j-log4j12-1.7.15.jar
Hive: 2.1.1
Main jars the Flink job depends on:
[INFO] +- org.apache.flink:flink-table-api-java:jar:flink-1.9.1:compile
[INFO] |  +- org.apache.flink:flink-table-common:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-core:jar:flink-1.9.1:compile
[INFO] |  |     +- org.apache.flink:flink-annotations:jar:flink-1.9.1:compile
[INFO] |  |     +- org.apache.flink:flink-metrics-core:jar:flink-1.9.1:compile
[INFO] |  |     \- com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
[INFO] |  |        +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
[INFO] |  |        \- org.objenesis:objenesis:jar:2.1:compile
[INFO] |  +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
[INFO] |  \- org.apache.flink:force-shading:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-table-planner-blink_2.11:jar:flink-1.9.1:compile
[INFO] |  +- org.apache.flink:flink-table-api-scala_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.scala-lang:scala-reflect:jar:2.11.12:compile
[INFO] |  |  \- org.scala-lang:scala-compiler:jar:2.11.12:compile
[INFO] |  +- org.apache.flink:flink-table-api-java-bridge_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.apache.flink:flink-java:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-streaming-java_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-table-api-scala-bridge_2.11:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-scala_2.11:jar:flink-1.9.1:compile
[INFO] |  +- org.apache.flink:flink-table-runtime-blink_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.codehaus.janino:janino:jar:3.0.9:compile
[INFO] |  |  \- org.apache.calcite.avatica:avatica-core:jar:1.15.0:compile
[INFO] |  \- org.reflections:reflections:jar:0.9.10:compile
[INFO] +- org.apache.flink:flink-table-planner_2.11:jar:flink-1.9.1:compile
[INFO] +- org.apache.commons:commons-lang3:jar:3.9:compile
[INFO] +- com.typesafe.akka:akka-actor_2.11:jar:2.5.21:compile
[INFO] |  +- org.scala-lang:scala-library:jar:2.11.8:compile
[INFO] |  +- com.typesafe:config:jar:1.3.3:compile
[INFO] |  \- org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
[INFO] +- org.apache.flink:flink-sql-client_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-clients_2.11:jar:1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-optimizer_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-streaming-scala_2.11:jar:1.9.1:compile
[INFO] |  +- log4j:log4j:jar:1.2.17:compile
[INFO] |  \- org.apache.flink:flink-shaded-jackson:jar:2.9.8-7.0:compile
[INFO] +- org.apache.flink:flink-json:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-csv:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-hbase_2.11:jar:1.9.1:compile
[INFO] +- org.apache.hbase:hbase-server:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase.thirdparty:hbase-shaded-protobuf:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase.thirdparty:hbase-shaded-netty:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase.thirdparty:hbase-shaded-miscellaneous:jar:2.2.1:compile
[INFO] |  |  \- com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
[INFO] |  +- org.apache.hbase:hbase-common:jar:2.2.1:compile
[INFO] |  |  \- com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1:compile
[INFO] |  +- org.apache.hbase:hbase-http:jar:2.2.1:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-util:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-util-ajax:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-http:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-security:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.glassfish.jersey.core:jersey-server:jar:2.25.1:compile
[INFO] |  |  |  +- org.glassfish.jersey.core:jersey-common:jar:2.25.1:compile
[INFO] |  |  |  |  +- org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1:compile
[INFO] |  |  |  |  \- org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
[INFO] |  |  |  +- org.glassfish.jersey.core:jersey-client:jar:2.25.1:compile
[INFO] |  |  |  +- org.glassfish.jersey.media:jersey-media-jaxb:jar:2.25.1:compile
[INFO] |  |  |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
[INFO] |  |  |  +- org.glassfish.hk2:hk2-api:jar:2.5.0-b32:compile
[INFO] |  |  |  |  +- org.glassfish.hk2:hk2-utils:jar:2.5.0-b32:compile
[INFO] |  |  |  |  \- org.glassfish.hk2.external:aopalliance-repackaged:jar:2.5.0-b32:compile
[INFO] |  |  |  +- org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32:compile
[INFO] |  |  |  \- org.glassfish.hk2:hk2-locator:jar:2.5.0-b32:compile
[INFO] |  |  +- org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.25.1:compile
[INFO] |  |  \- javax.ws.rs:javax.ws.rs-api:jar:2.0.1:compile
[INFO] |  +- org.apache.hbase:hbase-protocol:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-protocol-shaded:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-procedure:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-client:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-zookeeper:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-replication:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-metrics-api:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-metrics:jar:2.2.1:compile
[INFO] |  +- commons-codec:commons-codec:jar:1.10:compile
[INFO] |  +- org.apache.hbase:hbase-hadoop-compat:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-hadoop2-compat:jar:2.2.1:compile
[INFO] |  +- org.eclipse.jetty:jetty-server:jar:9.3.27.v20190418:compile
[INFO] |  |  \- org.eclipse.jetty:jetty-io:jar:9.3.27.v20190418:compile
[INFO] |  +- org.eclipse.jetty:jetty-servlet:jar:9.3.27.v20190418:compile
[INFO] |  +- org.eclipse.jetty:jetty-webapp:jar:9.3.27.v20190418:compile
[INFO] |  |  \- org.eclipse.jetty:jetty-xml:jar:9.3.27.v20190418:compile
[INFO] |  +- org.glassfish.web:javax.servlet.jsp:jar:2.3.2:compile
[INFO] |  |  \- org.glassfish:javax.el:jar:3.0.1-b11:compile (version selected from constraint [3.0.0,))
[INFO] |  +- javax.servlet.jsp:javax.servlet.jsp-api:jar:2.3.1:compile
[INFO] |  +- io.dropwizard.metrics:metrics-core:jar:3.2.6:compile
[INFO] |  +- commons-io:commons-io:jar:2.5:compile
[INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:compile
[INFO] |  +- org.apache.zookeeper:zookeeper:jar:3.4.10:compile
[INFO] |  +- javax.servlet:javax.servlet-api:jar:3.1.0:compile
[INFO] |  +- org.apache.htrace:htrace-core4:jar:4.2.0-incubating:compile
[INFO] |  +- com.lmax:disruptor:jar:3.3.6:compile
[INFO] |  +- commons-logging:commons-logging:jar:1.2:compile
[INFO] |  +- org.apache.commons:commons-crypto:jar:1.0.0:compile
[INFO] |  +- org.apache.hadoop:hadoop-distcp:jar:2.8.5:compile
[INFO] |  \- org.apache.yetus:audience-annotations:jar:0.5.0:compile
[INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO] +- mysql:mysql-connector-java:jar:8.0.18:compile
[INFO] +- org.apache.flink:flink-connector-hive_2.11:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-hadoop-compatibility_2.11:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-shaded-hadoop-2-uber:jar:2.7.5-7.0:provided
[INFO] +- org.apache.hive:hive-exec:jar:2.1.1:compile
[INFO] |  +- org.apache.hive:hive-ant:jar:2.1.1:compile
[INFO] |  |  \- org.apache.velocity:velocity:jar:1.5:compile
[INFO] |  |     \- oro:oro:jar:2.0.8:compile
[INFO] |  +- org.apache.hive:hive-llap-tez:jar:2.1.1:compile
[INFO] |  |  +- org.apache.hive:hive-common:jar:2.1.1:compile
[INFO] |  |  |  +- org.apache.hive:hive-storage-api:jar:2.1.1:compile
[INFO] |  |  |  +- org.apache.hive:hive-orc:jar:2.1.1:compile
[INFO] |  |  |  |  \- org.iq80.snappy:snappy:jar:0.2:compile
[INFO] |  |  |  +- org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127:compile
[INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-jta_1.1_spec:jar:1.1.1:compile
[INFO] |  |  |  |  +- javax.mail:mail:jar:1.4.1:compile
[INFO] |  |  |  |  +- javax.activation:activation:jar:1.1:compile
[INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-jaspic_1.0_spec:jar:1.0:compile
[INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-annotation_1.0_spec:jar:1.1.1:compile
[INFO] |  |  |  |  \- asm:asm-commons:jar:3.1:compile
[INFO] |  |  |  |     \- asm:asm-tree:jar:3.1:compile
[INFO] |  |  |  |        \- asm:asm:jar:3.1:compile
[INFO] |  |  |  +- org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016:compile
[INFO] |  |  |  +- joda-time:joda-time:jar:2.8.1:compile
[INFO] |  |  |  +- org.json:json:jar:20160810:compile
[INFO] |  |  |  +- io.dropwizard.metrics:metrics-jvm:jar:3.1.0:compile
[INFO] |  |  |  +- io.dropwizard.metrics:metrics-json:jar:3.1.0:compile
[INFO] |  |  |  \- com.github.joshelser:dropwizard-metrics-hadoop-metrics2-reporter:jar:0.1.2:compile
[INFO] |  |  \- org.apache.hive:hive-llap-client:jar:2.1.1:compile
[INFO] |  |     \- org.apache.hive:hive-llap-common:jar:2.1.1:compile
[INFO] |  |        \- org.apache.hive:hive-serde:jar:2.1.1:compile
[INFO] |  |           +- org.apache.hive:hive-service-rpc:jar:2.1.1:compile
[INFO] |  |           |  +- tomcat:jasper-compiler:jar:5.5.23:compile
[INFO] |  |           |  |  +- javax.servlet:jsp-api:jar:2.0:compile
[INFO] |  |           |  |  \- ant:ant:jar:1.6.5:compile
[INFO] |  |           |  +- tomcat:jasper-runtime:jar:5.5.23:compile
[INFO] |  |           |  |  +- javax.servlet:servlet-api:jar:2.4:compile
[INFO] |  |           |  |  \- commons-el:commons-el:jar:1.0:compile
[INFO] |  |           |  \- org.apache.thrift:libfb303:jar:0.9.3:compile
[INFO] |  |           +- org.apache.avro:avro:jar:1.7.7:compile
[INFO] |  |           |  \- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO] |  |           +- net.sf.opencsv:opencsv:jar:2.3:compile
[INFO] |  |           \- org.apache.parquet:parquet-hadoop-bundle:jar:1.8.1:compile
[INFO] |  +- org.apache.hive:hive-shims:jar:2.1.1:compile
[INFO] |  |  +- org.apache.hive.shims:hive-shims-common:jar:2.1.1:compile
[INFO] |  |  |  \- org.apache.thrift:libthrift:jar:0.9.3:compile
[INFO] |  |  +- org.apache.hive.shims:hive-shims-0.23:jar:2.1.1:runtime
[INFO] |  |  |  \- org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:2.6.1:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-annotations:jar:2.6.1:runtime
[INFO] |  |  |     +- com.google.inject.extensions:guice-servlet:jar:3.0:runtime
[INFO] |  |  |     +- com.google.inject:guice:jar:3.0:runtime
[INFO] |  |  |     |  +- javax.inject:javax.inject:jar:1:runtime
[INFO] |  |  |     |  \- aopalliance:aopalliance:jar:1.0:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-json:jar:1.9:runtime
[INFO] |  |  |     |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:runtime
[INFO] |  |  |     |  +- org.codehaus.jackson:jackson-core-asl:jar:1.8.3:compile
[INFO] |  |  |     |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.3:compile
[INFO] |  |  |     |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:runtime
[INFO] |  |  |     |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:runtime
[INFO] |  |  |     +- com.sun.jersey.contribs:jersey-guice:jar:1.9:runtime
[INFO] |  |  |     |  \- com.sun.jersey:jersey-server:jar:1.9:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-common:jar:2.6.1:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.1:runtime
[INFO] |  |  |     +- javax.xml.bind:jaxb-api:jar:2.2.2:runtime
[INFO] |  |  |     |  \- javax.xml.stream:stax-api:jar:1.0-2:runtime
[INFO] |  |  |     +- org.codehaus.jettison:jettison:jar:1.1:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-core:jar:1.9:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-client:jar:1.9:runtime
[INFO] |  |  |     +- org.mortbay.jetty:jetty-util:jar:6.1.26:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.1:runtime
[INFO] |  |  |     |  \- org.fusesource.leveldbjni:leveldbjni-all:jar:1.8:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:2.6.1:runtime
[INFO] |  |  |     \- org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:2.6.1:runtime
[INFO] |  |  |        \- org.mortbay.jetty:jetty:jar:6.1.26:runtime
[INFO] |  |  \- org.apache.hive.shims:hive-shims-scheduler:jar:2.1.1:runtime
[INFO] |  +- commons-httpclient:commons-httpclient:jar:3.0.1:compile
[INFO] |  +- org.antlr:antlr-runtime:jar:3.4:compile
[INFO] |  |  +- org.antlr:stringtemplate:jar:3.2.1:compile
[INFO] |  |  \- antlr:antlr:jar:2.7.7:compile
[INFO] |  +- org.antlr:ST4:jar:4.0.4:compile
[INFO] |  +- org.apache.ant:ant:jar:1.9.1:compile
[INFO] |  |  \- org.apache.ant:ant-launcher:jar:1.9.1:compile
[INFO] |  +- org.apache.commons:commons-compress:jar:1.10:compile
[INFO] |  +- org.apache.ivy:ivy:jar:2.4.0:compile
[INFO] |  +- org.apache.curator:curator-framework:jar:2.6.0:compile
[INFO] |  |  \- org.apache.curator:curator-client:jar:2.6.0:compile
[INFO] |  +- org.apache.curator:apache-curator:pom:2.6.0:compile
[INFO] |  +- org.codehaus.groovy:groovy-all:jar:2.4.4:compile
[INFO] |  +- org.apache.calcite:calcite-core:jar:1.6.0:compile
[INFO] |  |  +- org.apache.calcite:calcite-linq4j:jar:1.6.0:compile
[INFO] |  |  +- commons-dbcp:commons-dbcp:jar:1.4:compile
[INFO] |  |  |  \- commons-pool:commons-pool:jar:1.5.4:compile
[INFO] |  |  +- net.hydromatic:aggdesigner-algorithm:jar:6.0:compile
[INFO] |  |  \- org.codehaus.janino:commons-compiler:jar:2.7.6:compile
[INFO] |  +- org.apache.calcite:calcite-avatica:jar:1.6.0:compile
[INFO] |  +- stax:stax-api:jar:1.0.1:compile
[INFO] |  \- jline:jline:jar:2.12:compile
[INFO] +- org.datanucleus:datanucleus-core:jar:4.1.6:compile
[INFO] +- org.datanucleus:datanucleus-api-jdo:jar:4.2.4:compile
[INFO] +- org.datanucleus:javax.jdo:jar:3.2.0-m3:compile
[INFO] |  \- javax.transaction:transaction-api:jar:1.1:compile
[INFO] +- org.datanucleus:datanucleus-rdbms:jar:4.1.9:compile
[INFO] +- hadoop-lzo:hadoop-lzo:jar:0.4.14:compile
[INFO] \- org.apache.flink:flink-runtime-web_2.11:jar:1.9.1:provided
[INFO]    +- org.apache.flink:flink-runtime_2.11:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-queryable-state-client-java:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-hadoop-fs:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-shaded-asm-6:jar:6.2.1-7.0:compile
[INFO]    |  +- com.typesafe.akka:akka-stream_2.11:jar:2.5.21:compile
[INFO]    |  |  +- org.reactivestreams:reactive-streams:jar:1.0.2:compile
[INFO]    |  |  \- com.typesafe:ssl-config-core_2.11:jar:0.3.7:compile
[INFO]    |  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21:compile
[INFO]    |  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.11:compile
[INFO]    |  +- org.clapper:grizzled-slf4j_2.11:jar:1.3.2:compile
[INFO]    |  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
[INFO]    |  +- org.xerial.snappy:snappy-java:jar:1.1.4:compile
[INFO]    |  \- com.twitter:chill_2.11:jar:0.7.6:compile
[INFO]    |     \- com.twitter:chill-java:jar:0.7.6:compile
[INFO]    +- org.apache.flink:flink-shaded-netty:jar:4.1.32.Final-7.0:compile
[INFO]    +- org.apache.flink:flink-shaded-guava:jar:18.0-7.0:compile
[INFO]    \- org.javassist:javassist:jar:3.19.0-GA:compile
[INFO] ------------------------------------------------------------------------

Log:

2020-02-28 17:17:07,890 INFO  org.apache.hadoop.security.UserGroupInformation - Login successful for user ***/dev@***.COM using keytab file /home/***/key.keytab
The line above comes from the Flink log; it shows the Kerberos login itself succeeds, yet the following exception is still thrown:
2020-02-28 17:17:08,658 INFO  org.apache.hadoop.hive.metastore.ObjectStore - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
2020-02-28 17:17:09,280 INFO  org.apache.hadoop.hive.metastore.MetaStoreDirectSql - Using direct SQL, underlying DB is MYSQL
2020-02-28 17:17:09,283 INFO  org.apache.hadoop.hive.metastore.ObjectStore - Initialized ObjectStore
2020-02-28 17:17:09,450 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - Added admin role in metastore
2020-02-28 17:17:09,452 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - Added public role in metastore
2020-02-28 17:17:09,474 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - No user is added in admin role, since config is empty
2020-02-28 17:17:09,634 INFO  org.apache.flink.table.catalog.hive.HiveCatalog - Connected to Hive metastore
2020-02-28 17:17:09,635 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - 0: get_database: ***
2020-02-28 17:17:09,637 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=*** ip=unknown-ip-addr cmd=get_database: ***
2020-02-28 17:17:09,658 INFO  org.apache.hadoop.hive.ql.metadata.HiveUtils - Adding metastore authorization provider: org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
2020-02-28 17:17:10,166 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2020-02-28 17:17:10,391 WARN  org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
2020-02-28 17:17:10,397 WARN  org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
2020-02-28 17:17:10,398 INFO  org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over ******.org/***.***.***.***:8020 after 1 fail over attempts. Trying to fail over immediately.
java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "***.***.***.org/***.***.***.***"; destination host is: "******.org":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy41.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy42.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
at org.apache.hadoop.hive.common.FileUtils.getFileStatusOrNull(FileUtils.java:770)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:368)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:343)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:152)
at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeReadDatabase(AuthorizationPreEventListener.java:204)
at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.onEvent(AuthorizationPreEventListener.java:152)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2153)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:932)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy35.get_database(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:150)
at com.sun.proxy.$Proxy36.getDatabase(Unknown Source)
at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.getDatabase(HiveMetastoreClientWrapper.java:102)
at org.apache.flink.table.catalog.hive.HiveCatalog.databaseExists(HiveCatalog.java:347)
at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:244)
at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)

…… (the frames elided here are in code that calls UserGroupInformation.loginUserFromKeytab(principal, keytab); and that login succeeds)
at this is my code.main(MyMainClass.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1452)
... 67 more
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
... 70 more
My current diagnosis is that this looks like it is caused by conflicting (polluted) jars. Any pointers would be much appreciated. Thanks!

叶贤勋
yxx_cmhd@163.com


On 2020/02/28 15:16, Rui Li <li...@apache.org> wrote:

Hi 叶贤勋,

I don't have a Kerberos environment at hand, but from the TokenCache code (version 2.7.5) this exception is probably caused by failing to obtain the RM address or principal. Please check the following settings:
mapreduce.framework.name
yarn.resourcemanager.address
yarn.resourcemanager.principal
and also whether your Flink job can actually read them.
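The checklist above can also be verified mechanically. The sketch below parses a Hadoop-style *-site.xml using only JDK classes and reports which of the three keys are absent; SiteXmlCheck is a hypothetical helper, not a Hadoop or Flink API, and a real job may also receive these values from defaults or from code rather than the XML files:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// Hypothetical helper: extracts <property><name>/<value> pairs from a
// Hadoop-style configuration XML and checks the keys from the checklist above.
public class SiteXmlCheck {
    public static Map<String, String> parse(String xml) throws Exception {
        Map<String, String> props = new HashMap<>();
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        NodeList nodes = doc.getElementsByTagName("property");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element p = (Element) nodes.item(i);
            String name = p.getElementsByTagName("name").item(0).getTextContent().trim();
            String value = p.getElementsByTagName("value").item(0).getTextContent().trim();
            props.put(name, value);
        }
        return props;
    }

    public static List<String> missingKeys(Map<String, String> props) {
        List<String> missing = new ArrayList<>();
        for (String key : new String[] {
                "mapreduce.framework.name",
                "yarn.resourcemanager.address",
                "yarn.resourcemanager.principal"}) {
            if (props.get(key) == null || props.get(key).isEmpty()) {
                missing.add(key);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        // With an empty config, all three checklist keys are reported missing.
        System.out.println(missingKeys(new HashMap<>()));
    }
}
```

Feeding it the yarn-site.xml / mapred-site.xml that the job actually ships with would confirm whether the settings are visible on the cluster side.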

On Fri, Feb 28, 2020 at 11:10 AM Kurt Young <yk...@gmail.com> wrote:

cc @lirui@apache.org <li...@apache.org>

Best,
Kurt


On Thu, Feb 13, 2020 at 10:22 AM 叶贤勋 <yx...@163.com> wrote:

Hi everyone,
I'm hitting an exception with a Kerberos-secured Hive 2.1.1 source and would like some advice.
Flink version: 1.9
Hive version: 2.1.1, with HiveShimV211 implemented.
Code:
public class HiveCatalogTest {
    private static final Logger LOG = LoggerFactory.getLogger(HiveCatalogTest.class);
    private String hiveConfDir = "/Users/yexianxun/dev/env/test-hive"; // a local path
    private TableEnvironment tableEnv;
    private HiveCatalog hive;
    private String hiveName;
    private String hiveDB;
    private String version;

    @Before
    public void before() {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        tableEnv = TableEnvironment.create(settings);
        hiveName = "myhive";
        hiveDB = "sloth";
        version = "2.1.1";
    }

    @Test
    public void testCatalogQuerySink() throws Exception {
        hive = new HiveCatalog(hiveName, hiveDB, hiveConfDir, version);
        System.setProperty("java.security.krb5.conf", hiveConfDir + "/krb5.conf");
        tableEnv.getConfig().getConfiguration().setString("stream_mode", "false");
        tableEnv.registerCatalog(hiveName, hive);
        tableEnv.useCatalog(hiveName);
        String query = "select * from " + hiveName + "." + hiveDB + ".testtbl2 where id = 20200202";
        Table table = tableEnv.sqlQuery(query);
        String newTableName = "testtbl2_1";
        table.insertInto(hiveName, hiveDB, newTableName);
        tableEnv.execute("test");
    }
}


HiveMetastoreClientFactory:
public static HiveMetastoreClientWrapper create(HiveConf hiveConf, String hiveVersion) {
    Preconditions.checkNotNull(hiveVersion, "Hive version cannot be null");
    if (System.getProperty("java.security.krb5.conf") != null) {
        if (System.getProperty("had_set_kerberos") == null) {
            String principal = "sloth/dev@BDMS.163.COM";
            String keytab = "/Users/yexianxun/dev/env/mammut-test-hive/sloth.keytab";
            try {
                sun.security.krb5.Config.refresh();
                UserGroupInformation.setConfiguration(hiveConf);
                UserGroupInformation.loginUserFromKeytab(principal, keytab);
                System.setProperty("had_set_kerberos", "true");
            } catch (Exception e) {
                LOG.error("", e);
            }
        }
    }
    return new HiveMetastoreClientWrapper(hiveConf, hiveVersion);
}


HiveCatalog:
private static HiveConf createHiveConf(@Nullable String hiveConfDir) {
    LOG.info("Setting hive conf dir as {}", hiveConfDir);
    try {
        HiveConf.setHiveSiteLocation(
                hiveConfDir == null ? null : Paths.get(hiveConfDir, "hive-site.xml").toUri().toURL());
    } catch (MalformedURLException e) {
        throw new CatalogException(
                String.format("Failed to get hive-site.xml from %s", hiveConfDir), e);
    }

    // create HiveConf from hadoop configuration
    HiveConf hiveConf = new HiveConf(
            HadoopUtils.getHadoopConfiguration(new org.apache.flink.configuration.Configuration()),
            HiveConf.class);
    try {
        hiveConf.addResource(Paths.get(hiveConfDir, "hdfs-site.xml").toUri().toURL());
        hiveConf.addResource(Paths.get(hiveConfDir, "core-site.xml").toUri().toURL());
    } catch (MalformedURLException e) {
        throw new CatalogException(String.format("Failed to get hdfs|core-site.xml from %s", hiveConfDir), e);
    }
    return hiveConf;
}


Running the testCatalogQuerySink method fails with the following error:
org.apache.flink.runtime.client.JobExecutionException: Could not retrieve JobResult.


at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:622)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:117)
at org.apache.flink.table.planner.delegation.BatchExecutor.execute(BatchExecutor.java:55)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:410)
at api.HiveCatalogTest.testCatalogQuerySink(HiveCatalogMumTest.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
... 7 more
Caused by: org.apache.flink.runtime.JobException: Creating the input
splits caused an error: Can't get Master Kerberos principal for use as
renewer
at

org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:270)
at

org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:907)
at

org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:230)
at

org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
at

org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
at

org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
at

org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
at

org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
at

org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
at

org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at

org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at

org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
... 10 more
Caused by: java.io.IOException: Can't get Master Kerberos principal for
use as renewer
at

org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
at

org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at

org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at

org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
at

org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at

org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:159)
at

org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:63)
at

org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:256)
... 22 more


The sink test path can insert data normally, but this error occurs when hive is
used as a source. It looks like obtaining the delegation token returned empty.
I'm not sure how to resolve this.
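
For reference, "Can't get Master Kerberos principal for use as renewer" is thrown by Hadoop's TokenCache when the Configuration used to create the input splits has no resource manager principal. Earlier in this thread the same error was fixed by setting yarn.resourcemanager.principal, so a plausible check is that the client-side yarn-site.xml visible to the Flink job contains something like the following (the principal value is a placeholder for your cluster's actual RM principal):

```xml
<!-- yarn-site.xml on the machine/classpath where the Flink job creates its
     input splits. Without this property, TokenCache.obtainTokensForNamenodes
     cannot determine a renewer and fails with the error above. -->
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>yarn/_HOST@EXAMPLE.COM</value>
</property>
```
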





叶贤勋
yxx_cmhd@163.com




Re: Hive Source with Kerberos authentication issue

Posted by Rui Li <li...@apache.org>.
Is the user who submits the job the same user you log in with? I suspect that because the users differ, the metastore client does not pick up the correct credentials.
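
The doAs approach discussed in this thread can be sketched roughly as follows. This is a sketch only: it assumes flink-connector-hive and a Hadoop client on the classpath, a valid keytab, and a `tableEnv` already created; the principal, paths, catalog name, and hive version string are placeholders, and the HiveCatalog constructor shown is the Flink 1.9 one.

```java
import java.security.PrivilegedExceptionAction;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.hadoop.security.UserGroupInformation;

// Placeholder principal and keytab path.
UserGroupInformation.loginUserFromKeytab("user/dev@EXAMPLE.COM", "/path/to/key.keytab");

// Run the catalog registration as the login user, so that the thrift
// transport (TUGIAssumingTransport) sees the TGT from the login above
// when HiveCatalog.open() connects to the metastore.
UserGroupInformation.getLoginUser().doAs((PrivilegedExceptionAction<Void>) () -> {
    HiveCatalog hive = new HiveCatalog("myhive", "default", "/path/to/hive-conf", "2.1.1");
    tableEnv.registerCatalog("myhive", hive);  // registerCatalog invokes open()
    return null;
});
```
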

On Tue, Mar 10, 2020 at 6:42 PM 叶贤勋 <yx...@163.com> wrote:

> It works inside doAs. All Hive operations involving authentication in my
> hive connector now run inside doAs, which resolves the authentication problem.
> The stacktrace mentioned earlier was printed with our company's own
> repackaged hive-exec jar, so it doesn't match the upstream source; the same
> problem also occurs with the official hive-exec-2.1.1.jar.
>
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
> On Mar 5, 2020, 13:52, Rui Li <li...@apache.org> wrote:
>
>
> Could you first try the doAs approach, e.g. do the HiveCatalog registration part inside UserGroupInformation.getLoginUser().doAs(), to check whether HiveMetaStoreClient is failing to pick up your login user's information.
> Also, is your hive version 2.1.1? The stacktrace doesn't match the 2.1.1 code, e.g.
> line 562 of HiveMetaStoreClient.java:
>
> https://github.com/apache/hive/blob/rel/release-2.1.1/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L562
>
> On Wed, Mar 4, 2020 at 9:17 PM 叶贤勋 <yx...@163.com> wrote:
>
> Hi,
> The datanucleus jar issue is resolved; previously the connection to the HMS was apparently not going through hive.metastore.uris.
> I do the Kerberos login in HiveCatalog's open method,
> UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
> and the login succeeds. In principle, once the Kerberos login succeeds, this
> process should have permission to access the metastore. But creating the
> metastore client fails with the following error.
>
> 2020-03-04 20:23:17,191 DEBUG
> org.apache.flink.table.catalog.hive.HiveCatalog               - Hive
> MetaStore Uris is thrift://***1:9083,thrift://***2:9083.
> 2020-03-04 20:23:17,192 INFO
> org.apache.flink.table.catalog.hive.HiveCatalog               - Created
> HiveCatalog 'myhive'
> 2020-03-04 20:23:17,360 INFO
> org.apache.hadoop.security.UserGroupInformation               - Login
> successful for user ***/dev@***.COM using keytab file
>
> /Users/yexianxun/IdeaProjects/flink-1.9.0/build-target/examples/hive/kerberos/key.keytab
> 2020-03-04 20:23:17,360 DEBUG
> org.apache.flink.table.catalog.hive.HiveCatalog               - login user
> by kerberos, principal is ***/dev@***.CO, login is true
> 2020-03-04 20:23:17,374 INFO
> org.apache.curator.framework.imps.CuratorFrameworkImpl        - Starting
> 2020-03-04 20:23:17,374 DEBUG org.apache.curator.CuratorZookeeperClient
> - Starting
> 2020-03-04 20:23:17,374 DEBUG org.apache.curator.ConnectionState
> - Starting
> 2020-03-04 20:23:17,374 DEBUG org.apache.curator.ConnectionState
> - reset
> 2020-03-04 20:23:17,374 INFO  org.apache.zookeeper.ZooKeeper
> - Initiating client connection,
> connectString=***1:2181,***2:2181,***3:2181 sessionTimeout=60000
> watcher=org.apache.curator.ConnectionState@6b52dd31
> 2020-03-04 20:23:17,379 DEBUG
> org.apache.zookeeper.client.ZooKeeperSaslClient               - JAAS
> loginContext is: HiveZooKeeperClient
> 2020-03-04 20:23:17,381 WARN  org.apache.zookeeper.ClientCnxn
> - SASL configuration failed:
> javax.security.auth.login.LoginException: Unable to obtain password from
> user
> Will continue connection to Zookeeper server without SASL authentication,
> if Zookeeper server allows it.
> 2020-03-04 20:23:17,381 INFO  org.apache.zookeeper.ClientCnxn
> - Opening socket connection to server ***1:2181
> 2020-03-04 20:23:17,381 ERROR org.apache.curator.ConnectionState
> - Authentication failed
> 2020-03-04 20:23:17,384 INFO  org.apache.zookeeper.ClientCnxn
> - Socket connection established to ***1:2181, initiating
> session
> 2020-03-04 20:23:17,384 DEBUG org.apache.zookeeper.ClientCnxn
> - Session establishment request sent on ***1:2181
> 2020-03-04 20:23:17,393 INFO  org.apache.zookeeper.ClientCnxn
> - Session establishment complete on server ***1:2181,
> sessionid = 0x16f7af0645c25a8, negotiated timeout = 40000
> 2020-03-04 20:23:17,393 INFO
> org.apache.curator.framework.state.ConnectionStateManager     - State
> change: CONNECTED
> 2020-03-04 20:23:17,397 DEBUG org.apache.zookeeper.ClientCnxn
> - Reading reply sessionid:0x16f7af0645c25a8, packet::
> clientPath:null serverPath:null finished:false header:: 1,3  replyHeader::
> 1,292064345364,0  request:: '/hive_base,F  response::
>
> s{17179869635,17179869635,1527576303010,1527576303010,0,3,0,0,0,1,249117832596}
> 2020-03-04 20:23:17,400 DEBUG org.apache.zookeeper.ClientCnxn
> - Reading reply sessionid:0x16f7af0645c25a8, packet::
> clientPath:null serverPath:null finished:false header:: 2,12  replyHeader::
> 2,292064345364,0  request:: '/hive_base/namespaces/hive/uris,F  response::
>
> v{'dGhyaWZ0Oi8vaHphZGctYmRtcy03LnNlcnZlci4xNjMub3JnOjkwODM=,'dGhyaWZ0Oi8vaHphZGctYmRtcy04LnNlcnZlci4xNjMub3JnOjkwODM=},s{17179869664,17179869664,1527576306106,1527576306106,0,1106,0,0,0,2,292063632993}
> 2020-03-04 20:23:17,401 INFO  hive.metastore
> - atlasProxy is set to
> 2020-03-04 20:23:17,401 INFO  hive.metastore
> - Trying to connect to metastore with URI thrift://
> hzadg-bdms-7.server.163.org:9083
> 2020-03-04 20:23:17,408 INFO  hive.metastore
> - tokenStrForm should not be null for querynull
> 2020-03-04 20:23:17,432 DEBUG org.apache.thrift.transport.TSaslTransport
> - opening transport
> org.apache.thrift.transport.TSaslClientTransport@3c69362a
> 2020-03-04 20:23:17,441 ERROR org.apache.thrift.transport.TSaslTransport
> - SASL negotiation failure
> javax.security.sasl.SaslException: GSS initiate failed [Caused by
> GSSException: No valid credentials provided (Mechanism level: Failed to
> find any Kerberos tgt)]
> at
>
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at
>
> org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
> at
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
> at
>
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
> at
>
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
> at
>
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at
>
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:562)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:351)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:213)
> at
>
> org.apache.flink.table.catalog.hive.client.HiveShimV211.getHiveMetastoreClient(HiveShimV211.java:68)
> at
>
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:225)
> at
>
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:66)
> at
>
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
> at
> org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:266)
> at
>
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
> at
>
> org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
> ......business logic......
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
> at
>
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
> at
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
> at
>
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
> at
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
> at
>
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
> at
>
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at
>
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: GSSException: No valid credentials provided (Mechanism level:
> Failed to find any Kerberos tgt)
> at
>
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
> at
>
> sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
> at
>
> sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
> at
>
> sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
> at
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
> at
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
> at
>
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
> ... 42 more
> 2020-03-04 20:23:17,443 DEBUG org.apache.thrift.transport.TSaslTransport
> - CLIENT: Writing message with status BAD and payload
> length 19
> 2020-03-04 20:23:17,445 WARN  hive.metastore
> - Failed to connect to the MetaStore Server...
> org.apache.thrift.transport.TTransportException: GSS initiate failed
> at
>
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
> at
> org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
> at
>
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
> at
>
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
> at
>
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at
>
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:562)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:351)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:213)
> at
>
> org.apache.flink.table.catalog.hive.client.HiveShimV211.getHiveMetastoreClient(HiveShimV211.java:68)
> at
>
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:225)
> at
>
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:66)
> at
>
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
> at
> org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:266)
> at
>
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
> at
>
> org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
> ......business logic......
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
> at
>
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
> at
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
> at
>
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
> at
> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
> at
>
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
> at
>
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at
>
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
>
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
> On Mar 3, 2020, 19:04, Rui Li <li...@apache.org> wrote:
>
> datanucleus is used on the HMS side. If things fail without datanucleus, your
> code is trying to create an embedded metastore. Is that the intended
> behavior? My understanding is that you have a remote HMS and want
> HiveCatalog to connect to that HMS?
>
> On Tue, Mar 3, 2020 at 4:00 PM 叶贤勋 <yx...@163.com> wrote:
>
> The hive conf should be correct; the earlier UserGroupInformation login succeeds.
> Without the datanucleus dependencies, class-not-found and similar exceptions are thrown:
> 1. java.lang.ClassNotFoundException:
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory
> 2. Caused by: org.datanucleus.exceptions.NucleusUserException: There is no
> available StoreManager of type "rdbms". Please make sure you have specified
> "datanucleus.storeManagerType" correctly and that all relevant plugins are
> in the CLASSPATH
>
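
If the intent is to connect to a remote, kerberized HMS (in which case the datanucleus jars are only needed by an embedded metastore, not the client), the hive conf seen by HiveCatalog should contain something along these lines. This is a hypothetical sketch: the host names, port, and principal are placeholders; the property names are standard Hive metastore settings.

```xml
<!-- hive-site.xml sketch for a remote, kerberized metastore.
     With hive.metastore.uris set, HiveCatalog connects via thrift
     instead of instantiating an embedded metastore. -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hms-host-1:9083,thrift://hms-host-2:9083</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
```
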
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
> On Mar 2, 2020, 11:50, Rui Li <li...@apache.org> wrote:
>
> From the log you posted it looks like an embedded metastore was created.
> Could you check whether HiveCatalog read an incorrect hive conf? Also, are
> all these maven dependencies you listed packaged into your flink job jar?
> Dependencies like datanucleus should not be needed.
>
> On Sat, Feb 29, 2020 at 10:42 PM 叶贤勋 <yx...@163.com> wrote:
>
> Hi Rui Li, thanks for your reply.
> The earlier problem was solved by setting yarn.resourcemanager.principal.
> But now another problem has come up; please take a look.
>
> Background: the flink job still uses a kerberized hive as source & sink. The
> same code passes Kerberos authentication when tested locally and can query
> and insert data into hive, but once the job is submitted to the cluster,
> Kerberos authentication fails.
> Flink: 1.9.1; flink-1.9.1/lib/ contains flink-dist_2.11-1.9.1.jar,
> flink-shaded-hadoop-2-uber-2.7.5-7.0.jar, log4j-1.2.17.jar,
> slf4j-log4j12-1.7.15.jar
> Hive: 2.1.1
> Main jars the flink job depends on:
> [INFO] +- org.apache.flink:flink-table-api-java:jar:flink-1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-table-common:jar:flink-1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-core:jar:flink-1.9.1:compile
> [INFO] |  |     +-
> org.apache.flink:flink-annotations:jar:flink-1.9.1:compile
> [INFO] |  |     +-
> org.apache.flink:flink-metrics-core:jar:flink-1.9.1:compile
> [INFO] |  |     \- com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [INFO] |  |        +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
> [INFO] |  |        \- org.objenesis:objenesis:jar:2.1:compile
> [INFO] |  +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [INFO] |  \- org.apache.flink:force-shading:jar:1.9.1:compile
> [INFO] +-
> org.apache.flink:flink-table-planner-blink_2.11:jar:flink-1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-scala_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.scala-lang:scala-reflect:jar:2.11.12:compile
> [INFO] |  |  \- org.scala-lang:scala-compiler:jar:2.11.12:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-java-bridge_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.apache.flink:flink-java:jar:flink-1.9.1:compile
> [INFO] |  |  \-
> org.apache.flink:flink-streaming-java_2.11:jar:1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-scala-bridge_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-scala_2.11:jar:flink-1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-runtime-blink_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.codehaus.janino:janino:jar:3.0.9:compile
> [INFO] |  |  \- org.apache.calcite.avatica:avatica-core:jar:1.15.0:compile
> [INFO] |  \- org.reflections:reflections:jar:0.9.10:compile
> [INFO] +- org.apache.flink:flink-table-planner_2.11:jar:flink-1.9.1:compile
> [INFO] +- org.apache.commons:commons-lang3:jar:3.9:compile
> [INFO] +- com.typesafe.akka:akka-actor_2.11:jar:2.5.21:compile
> [INFO] |  +- org.scala-lang:scala-library:jar:2.11.8:compile
> [INFO] |  +- com.typesafe:config:jar:1.3.3:compile
> [INFO] |  \-
> org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
> [INFO] +- org.apache.flink:flink-sql-client_2.11:jar:1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-clients_2.11:jar:1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-optimizer_2.11:jar:1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-streaming-scala_2.11:jar:1.9.1:compile
> [INFO] |  +- log4j:log4j:jar:1.2.17:compile
> [INFO] |  \- org.apache.flink:flink-shaded-jackson:jar:2.9.8-7.0:compile
> [INFO] +- org.apache.flink:flink-json:jar:1.9.1:compile
> [INFO] +- org.apache.flink:flink-csv:jar:1.9.1:compile
> [INFO] +- org.apache.flink:flink-hbase_2.11:jar:1.9.1:compile
> [INFO] +- org.apache.hbase:hbase-server:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-protobuf:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-netty:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-miscellaneous:jar:2.2.1:compile
> [INFO] |  |  \-
> com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
> [INFO] |  +- org.apache.hbase:hbase-common:jar:2.2.1:compile
> [INFO] |  |  \-
> com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1:compile
> [INFO] |  +- org.apache.hbase:hbase-http:jar:2.2.1:compile
> [INFO] |  |  +- org.eclipse.jetty:jetty-util:jar:9.3.27.v20190418:compile
> [INFO] |  |  +-
> org.eclipse.jetty:jetty-util-ajax:jar:9.3.27.v20190418:compile
> [INFO] |  |  +- org.eclipse.jetty:jetty-http:jar:9.3.27.v20190418:compile
> [INFO] |  |  +-
> org.eclipse.jetty:jetty-security:jar:9.3.27.v20190418:compile
> [INFO] |  |  +- org.glassfish.jersey.core:jersey-server:jar:2.25.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.core:jersey-common:jar:2.25.1:compile
> [INFO] |  |  |  |  +-
> org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1:compile
> [INFO] |  |  |  |  \-
> org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.core:jersey-client:jar:2.25.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.media:jersey-media-jaxb:jar:2.25.1:compile
> [INFO] |  |  |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
> [INFO] |  |  |  +- org.glassfish.hk2:hk2-api:jar:2.5.0-b32:compile
> [INFO] |  |  |  |  +- org.glassfish.hk2:hk2-utils:jar:2.5.0-b32:compile
> [INFO] |  |  |  |  \-
> org.glassfish.hk2.external:aopalliance-repackaged:jar:2.5.0-b32:compile
> [INFO] |  |  |  +-
> org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32:compile
> [INFO] |  |  |  \- org.glassfish.hk2:hk2-locator:jar:2.5.0-b32:compile
> [INFO] |  |  +-
>
>
>
> org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.25.1:compile
> [INFO] |  |  \- javax.ws.rs:javax.ws.rs-api:jar:2.0.1:compile
> [INFO] |  +- org.apache.hbase:hbase-protocol:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-protocol-shaded:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-procedure:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-client:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-zookeeper:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-replication:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-metrics-api:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-metrics:jar:2.2.1:compile
> [INFO] |  +- commons-codec:commons-codec:jar:1.10:compile
> [INFO] |  +- org.apache.hbase:hbase-hadoop-compat:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-hadoop2-compat:jar:2.2.1:compile
> [INFO] |  +- org.eclipse.jetty:jetty-server:jar:9.3.27.v20190418:compile
> [INFO] |  |  \- org.eclipse.jetty:jetty-io:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.eclipse.jetty:jetty-servlet:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.eclipse.jetty:jetty-webapp:jar:9.3.27.v20190418:compile
> [INFO] |  |  \- org.eclipse.jetty:jetty-xml:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.glassfish.web:javax.servlet.jsp:jar:2.3.2:compile
> [INFO] |  |  \- org.glassfish:javax.el:jar:3.0.1-b11:compile (version
> selected from constraint [3.0.0,))
> [INFO] |  +- javax.servlet.jsp:javax.servlet.jsp-api:jar:2.3.1:compile
> [INFO] |  +- io.dropwizard.metrics:metrics-core:jar:3.2.6:compile
> [INFO] |  +- commons-io:commons-io:jar:2.5:compile
> [INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:compile
> [INFO] |  +- org.apache.zookeeper:zookeeper:jar:3.4.10:compile
> [INFO] |  +- javax.servlet:javax.servlet-api:jar:3.1.0:compile
> [INFO] |  +- org.apache.htrace:htrace-core4:jar:4.2.0-incubating:compile
> [INFO] |  +- com.lmax:disruptor:jar:3.3.6:compile
> [INFO] |  +- commons-logging:commons-logging:jar:1.2:compile
> [INFO] |  +- org.apache.commons:commons-crypto:jar:1.0.0:compile
> [INFO] |  +- org.apache.hadoop:hadoop-distcp:jar:2.8.5:compile
> [INFO] |  \- org.apache.yetus:audience-annotations:jar:0.5.0:compile
> [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
> [INFO] +- mysql:mysql-connector-java:jar:8.0.18:compile
> [INFO] +- org.apache.flink:flink-connector-hive_2.11:jar:1.9.1:compile
> [INFO] +-
> org.apache.flink:flink-hadoop-compatibility_2.11:jar:1.9.1:compile
> [INFO] +-
> org.apache.flink:flink-shaded-hadoop-2-uber:jar:2.7.5-7.0:provided
> [INFO] +- org.apache.hive:hive-exec:jar:2.1.1:compile
> [INFO] |  +- org.apache.hive:hive-ant:jar:2.1.1:compile
> [INFO] |  |  \- org.apache.velocity:velocity:jar:1.5:compile
> [INFO] |  |     \- oro:oro:jar:2.0.8:compile
> [INFO] |  +- org.apache.hive:hive-llap-tez:jar:2.1.1:compile
> [INFO] |  |  +- org.apache.hive:hive-common:jar:2.1.1:compile
> [INFO] |  |  |  +- org.apache.hive:hive-storage-api:jar:2.1.1:compile
> [INFO] |  |  |  +- org.apache.hive:hive-orc:jar:2.1.1:compile
> [INFO] |  |  |  |  \- org.iq80.snappy:snappy:jar:0.2:compile
> [INFO] |  |  |  +-
> org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127:compile
> [INFO] |  |  |  |  +-
> org.apache.geronimo.specs:geronimo-jta_1.1_spec:jar:1.1.1:compile
> [INFO] |  |  |  |  +- javax.mail:mail:jar:1.4.1:compile
> [INFO] |  |  |  |  +- javax.activation:activation:jar:1.1:compile
> [INFO] |  |  |  |  +-
> org.apache.geronimo.specs:geronimo-jaspic_1.0_spec:jar:1.0:compile
> [INFO] |  |  |  |  +-
> org.apache.geronimo.specs:geronimo-annotation_1.0_spec:jar:1.1.1:compile
> [INFO] |  |  |  |  \- asm:asm-commons:jar:3.1:compile
> [INFO] |  |  |  |     \- asm:asm-tree:jar:3.1:compile
> [INFO] |  |  |  |        \- asm:asm:jar:3.1:compile
> [INFO] |  |  |  +-
> org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016:compile
> [INFO] |  |  |  +- joda-time:joda-time:jar:2.8.1:compile
> [INFO] |  |  |  +- org.json:json:jar:20160810:compile
> [INFO] |  |  |  +- io.dropwizard.metrics:metrics-jvm:jar:3.1.0:compile
> [INFO] |  |  |  +- io.dropwizard.metrics:metrics-json:jar:3.1.0:compile
> [INFO] |  |  |  \-
>
>
>
> com.github.joshelser:dropwizard-metrics-hadoop-metrics2-reporter:jar:0.1.2:compile
> [INFO] |  |  \- org.apache.hive:hive-llap-client:jar:2.1.1:compile
> [INFO] |  |     \- org.apache.hive:hive-llap-common:jar:2.1.1:compile
> [INFO] |  |        \- org.apache.hive:hive-serde:jar:2.1.1:compile
> [INFO] |  |           +- org.apache.hive:hive-service-rpc:jar:2.1.1:compile
> [INFO] |  |           |  +- tomcat:jasper-compiler:jar:5.5.23:compile
> [INFO] |  |           |  |  +- javax.servlet:jsp-api:jar:2.0:compile
> [INFO] |  |           |  |  \- ant:ant:jar:1.6.5:compile
> [INFO] |  |           |  +- tomcat:jasper-runtime:jar:5.5.23:compile
> [INFO] |  |           |  |  +- javax.servlet:servlet-api:jar:2.4:compile
> [INFO] |  |           |  |  \- commons-el:commons-el:jar:1.0:compile
> [INFO] |  |           |  \- org.apache.thrift:libfb303:jar:0.9.3:compile
> [INFO] |  |           +- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] |  |           |  \-
> com.thoughtworks.paranamer:paranamer:jar:2.3:compile
> [INFO] |  |           +- net.sf.opencsv:opencsv:jar:2.3:compile
> [INFO] |  |           \-
> org.apache.parquet:parquet-hadoop-bundle:jar:1.8.1:compile
> [INFO] |  +- org.apache.hive:hive-shims:jar:2.1.1:compile
> [INFO] |  |  +- org.apache.hive.shims:hive-shims-common:jar:2.1.1:compile
> [INFO] |  |  |  \- org.apache.thrift:libthrift:jar:0.9.3:compile
> [INFO] |  |  +- org.apache.hive.shims:hive-shims-0.23:jar:2.1.1:runtime
> [INFO] |  |  |  \-
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:2.6.1:runtime
> [INFO] |  |  |     +-
> org.apache.hadoop:hadoop-annotations:jar:2.6.1:runtime
> [INFO] |  |  |     +-
> com.google.inject.extensions:guice-servlet:jar:3.0:runtime
> [INFO] |  |  |     +- com.google.inject:guice:jar:3.0:runtime
> [INFO] |  |  |     |  +- javax.inject:javax.inject:jar:1:runtime
> [INFO] |  |  |     |  \- aopalliance:aopalliance:jar:1.0:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-json:jar:1.9:runtime
> [INFO] |  |  |     |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:runtime
> [INFO] |  |  |     |  +-
> org.codehaus.jackson:jackson-core-asl:jar:1.8.3:compile
> [INFO] |  |  |     |  +-
> org.codehaus.jackson:jackson-mapper-asl:jar:1.8.3:compile
> [INFO] |  |  |     |  +-
> org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:runtime
> [INFO] |  |  |     |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:runtime
> [INFO] |  |  |     +- com.sun.jersey.contribs:jersey-guice:jar:1.9:runtime
> [INFO] |  |  |     |  \- com.sun.jersey:jersey-server:jar:1.9:runtime
> [INFO] |  |  |     +-
> org.apache.hadoop:hadoop-yarn-common:jar:2.6.1:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.1:runtime
> [INFO] |  |  |     +- javax.xml.bind:jaxb-api:jar:2.2.2:runtime
> [INFO] |  |  |     |  \- javax.xml.stream:stax-api:jar:1.0-2:runtime
> [INFO] |  |  |     +- org.codehaus.jettison:jettison:jar:1.1:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-core:jar:1.9:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-client:jar:1.9:runtime
> [INFO] |  |  |     +- org.mortbay.jetty:jetty-util:jar:6.1.26:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.1:runtime
> [INFO] |  |  |     |  \- org.fusesource.leveldbjni:leveldbjni-all:jar:1.8:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:2.6.1:runtime
> [INFO] |  |  |     \- org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:2.6.1:runtime
> [INFO] |  |  |        \- org.mortbay.jetty:jetty:jar:6.1.26:runtime
> [INFO] |  |  \- org.apache.hive.shims:hive-shims-scheduler:jar:2.1.1:runtime
> [INFO] |  +- commons-httpclient:commons-httpclient:jar:3.0.1:compile
> [INFO] |  +- org.antlr:antlr-runtime:jar:3.4:compile
> [INFO] |  |  +- org.antlr:stringtemplate:jar:3.2.1:compile
> [INFO] |  |  \- antlr:antlr:jar:2.7.7:compile
> [INFO] |  +- org.antlr:ST4:jar:4.0.4:compile
> [INFO] |  +- org.apache.ant:ant:jar:1.9.1:compile
> [INFO] |  |  \- org.apache.ant:ant-launcher:jar:1.9.1:compile
> [INFO] |  +- org.apache.commons:commons-compress:jar:1.10:compile
> [INFO] |  +- org.apache.ivy:ivy:jar:2.4.0:compile
> [INFO] |  +- org.apache.curator:curator-framework:jar:2.6.0:compile
> [INFO] |  |  \- org.apache.curator:curator-client:jar:2.6.0:compile
> [INFO] |  +- org.apache.curator:apache-curator:pom:2.6.0:compile
> [INFO] |  +- org.codehaus.groovy:groovy-all:jar:2.4.4:compile
> [INFO] |  +- org.apache.calcite:calcite-core:jar:1.6.0:compile
> [INFO] |  |  +- org.apache.calcite:calcite-linq4j:jar:1.6.0:compile
> [INFO] |  |  +- commons-dbcp:commons-dbcp:jar:1.4:compile
> [INFO] |  |  |  \- commons-pool:commons-pool:jar:1.5.4:compile
> [INFO] |  |  +- net.hydromatic:aggdesigner-algorithm:jar:6.0:compile
> [INFO] |  |  \- org.codehaus.janino:commons-compiler:jar:2.7.6:compile
> [INFO] |  +- org.apache.calcite:calcite-avatica:jar:1.6.0:compile
> [INFO] |  +- stax:stax-api:jar:1.0.1:compile
> [INFO] |  \- jline:jline:jar:2.12:compile
> [INFO] +- org.datanucleus:datanucleus-core:jar:4.1.6:compile
> [INFO] +- org.datanucleus:datanucleus-api-jdo:jar:4.2.4:compile
> [INFO] +- org.datanucleus:javax.jdo:jar:3.2.0-m3:compile
> [INFO] |  \- javax.transaction:transaction-api:jar:1.1:compile
> [INFO] +- org.datanucleus:datanucleus-rdbms:jar:4.1.9:compile
> [INFO] +- hadoop-lzo:hadoop-lzo:jar:0.4.14:compile
> [INFO] \- org.apache.flink:flink-runtime-web_2.11:jar:1.9.1:provided
> [INFO]    +- org.apache.flink:flink-runtime_2.11:jar:1.9.1:compile
> [INFO]    |  +- org.apache.flink:flink-queryable-state-client-java:jar:1.9.1:compile
> [INFO]    |  +- org.apache.flink:flink-hadoop-fs:jar:1.9.1:compile
> [INFO]    |  +- org.apache.flink:flink-shaded-asm-6:jar:6.2.1-7.0:compile
> [INFO]    |  +- com.typesafe.akka:akka-stream_2.11:jar:2.5.21:compile
> [INFO]    |  |  +- org.reactivestreams:reactive-streams:jar:1.0.2:compile
> [INFO]    |  |  \- com.typesafe:ssl-config-core_2.11:jar:0.3.7:compile
> [INFO]    |  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21:compile
> [INFO]    |  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.11:compile
> [INFO]    |  +- org.clapper:grizzled-slf4j_2.11:jar:1.3.2:compile
> [INFO]    |  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
> [INFO]    |  +- org.xerial.snappy:snappy-java:jar:1.1.4:compile
> [INFO]    |  \- com.twitter:chill_2.11:jar:0.7.6:compile
> [INFO]    |     \- com.twitter:chill-java:jar:0.7.6:compile
> [INFO]    +- org.apache.flink:flink-shaded-netty:jar:4.1.32.Final-7.0:compile
> [INFO]    +- org.apache.flink:flink-shaded-guava:jar:18.0-7.0:compile
> [INFO]    \- org.javassist:javassist:jar:3.19.0-GA:compile
> [INFO] ————————————————————————————————————
>
> Logs:
>
> 2020-02-28 17:17:07,890 INFO  org.apache.hadoop.security.UserGroupInformation - Login successful for user ***/dev@***.COM using keytab file /home/***/key.keytab
> The line above is printed by Flink; it shows the Kerberos authentication succeeded and the login worked, yet the following exception is still thrown:
> 2020-02-28 17:17:08,658 INFO  org.apache.hadoop.hive.metastore.ObjectStore - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
> 2020-02-28 17:17:09,280 INFO  org.apache.hadoop.hive.metastore.MetaStoreDirectSql - Using direct SQL, underlying DB is MYSQL
> 2020-02-28 17:17:09,283 INFO  org.apache.hadoop.hive.metastore.ObjectStore - Initialized ObjectStore
> 2020-02-28 17:17:09,450 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - Added admin role in metastore
> 2020-02-28 17:17:09,452 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - Added public role in metastore
> 2020-02-28 17:17:09,474 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - No user is added in admin role, since config is empty
> 2020-02-28 17:17:09,634 INFO  org.apache.flink.table.catalog.hive.HiveCatalog - Connected to Hive metastore
> 2020-02-28 17:17:09,635 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - 0: get_database: ***
> 2020-02-28 17:17:09,637 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=*** ip=unknown-ip-addr cmd=get_database: ***
> 2020-02-28 17:17:09,658 INFO  org.apache.hadoop.hive.ql.metadata.HiveUtils - Adding metastore authorization provider: org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
> 2020-02-28 17:17:10,166 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 2020-02-28 17:17:10,391 WARN  org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> 2020-02-28 17:17:10,397 WARN  org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> 2020-02-28 17:17:10,398 INFO  org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over ******.org/***.***.***.***:8020 after 1 fail over attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "***.***.***.org/***.***.***.***"; destination host is: "******.org":8020;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1480)
> at org.apache.hadoop.ipc.Client.call(Client.java:1413)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy41.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy42.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
> at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at org.apache.hadoop.hive.common.FileUtils.getFileStatusOrNull(FileUtils.java:770)
> at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:368)
> at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:343)
> at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:152)
> at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeReadDatabase(AuthorizationPreEventListener.java:204)
> at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.onEvent(AuthorizationPreEventListener.java:152)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2153)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:932)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
> at com.sun.proxy.$Proxy35.get_database(Unknown Source)
> at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1280)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:150)
> at com.sun.proxy.$Proxy36.getDatabase(Unknown Source)
> at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.getDatabase(HiveMetastoreClientWrapper.java:102)
> at org.apache.flink.table.catalog.hive.HiveCatalog.databaseExists(HiveCatalog.java:347)
> at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:244)
> at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
>
> …… (in the elided frames, UserGroupInformation.loginUserFromKeytab(principal, keytab) was called and authentication succeeded)
> at this is my code.main(MyMainClass.java:24)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
> at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
> at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
> at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
> at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
> at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
> at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738)
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> ... 67 more
> Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561)
> at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
> ... 70 more
> The current diagnosis suggests the classpath is polluted by conflicting jars. Any pointers would be appreciated. Thanks!
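The trace above shows the HDFS call failing inside Hadoop's own `Subject.doAs` frames, which suggests the code that talks to HDFS is not running under the subject that `loginUserFromKeytab` created. The mechanism involved can be illustrated with a minimal, JDK-only sketch (the class name and principal string are illustrative, not from this thread): code executed inside `Subject.doAs` sees the subject's principals on the access-control context, while code outside does not. Hadoop's `UserGroupInformation.doAs` wraps this same JAAS mechanism around its Kerberos login subject.

```java
import java.security.AccessController;
import java.security.Principal;
import java.security.PrivilegedAction;
import java.util.Collections;
import javax.security.auth.Subject;

public class DoAsDemo {

    // Runs an action with the given subject attached to the current
    // access-control context and returns the principal name visible inside.
    static String nameSeenInsideDoAs(Subject subject) {
        return Subject.doAs(subject, (PrivilegedAction<String>) () -> {
            Subject attached = Subject.getSubject(AccessController.getContext());
            return attached.getPrincipals().iterator().next().getName();
        });
    }

    public static void main(String[] args) {
        // A stand-in for the subject loginUserFromKeytab would create;
        // the principal name is purely illustrative.
        Principal user = () -> "sloth/dev@EXAMPLE.COM";
        Subject subject = new Subject(false, Collections.singleton(user),
                Collections.emptySet(), Collections.emptySet());

        // Outside doAs no subject is attached, so security-aware libraries
        // (e.g. Hadoop's SASL/GSSAPI layer) find no credentials here.
        System.out.println("outside: " + Subject.getSubject(AccessController.getContext()));
        // Inside doAs the subject (and hence its credentials) is visible.
        System.out.println("inside: " + nameSeenInsideDoAs(subject));
    }
}
```

This is why wrapping the catalog/HDFS calls in `UserGroupInformation.getLoginUser().doAs(...)` can behave differently from merely having called `loginUserFromKeytab` earlier in the process.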
>
>
> On Feb 28, 2020 at 15:16, Rui Li <li...@apache.org> wrote:
>
> Hi 叶贤勋,
>
> I don't have a Kerberos environment at hand, but from the TokenCache code (version 2.7.5) this exception is likely caused by failing to obtain the RM address or the principal. Please check the following configs:
> mapreduce.framework.name
> yarn.resourcemanager.address
> yarn.resourcemanager.principal
> and whether your Flink job can actually read them.
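The three settings above normally live in the `mapred-site.xml` and `yarn-site.xml` that the client picks up via `HADOOP_CONF_DIR`. As a rough sketch of what the job needs to see (all hostnames and realm names here are placeholders, not values from this thread):

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host.example.com:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>yarn/_HOST@EXAMPLE.COM</value>
</property>
```

If `yarn.resourcemanager.principal` is missing from the configuration the job reads, Hadoop's `TokenCache` cannot determine a token renewer, which matches the "Can't get Master Kerberos principal for use as renewer" error below.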
>
> On Fri, Feb 28, 2020 at 11:10 AM Kurt Young <yk...@gmail.com> wrote:
>
> cc @lirui@apache.org <li...@apache.org>
>
> Best,
> Kurt
>
>
> On Thu, Feb 13, 2020 at 10:22 AM 叶贤勋 <yx...@163.com> wrote:
>
> Hi all,
> I'm hitting an exception when reading from a Kerberos-secured Hive 2.1.1 source and would appreciate some advice.
> Flink version: 1.9
> Hive version: 2.1.1, with a HiveShimV211 implementation.
> Code:
> public class HiveCatalogTest {
>     private static final Logger LOG = LoggerFactory.getLogger(HiveCatalogTest.class);
>     private String hiveConfDir = "/Users/yexianxun/dev/env/test-hive"; // a local path
>     private TableEnvironment tableEnv;
>     private HiveCatalog hive;
>     private String hiveName;
>     private String hiveDB;
>     private String version;
>
>     @Before
>     public void before() {
>         EnvironmentSettings settings = EnvironmentSettings.newInstance()
>                 .useBlinkPlanner()
>                 .inBatchMode()
>                 .build();
>         tableEnv = TableEnvironment.create(settings);
>         hiveName = "myhive";
>         hiveDB = "sloth";
>         version = "2.1.1";
>     }
>
>     @Test
>     public void testCatalogQuerySink() throws Exception {
>         hive = new HiveCatalog(hiveName, hiveDB, hiveConfDir, version);
>         System.setProperty("java.security.krb5.conf", hiveConfDir + "/krb5.conf");
>         tableEnv.getConfig().getConfiguration().setString("stream_mode", "false");
>         tableEnv.registerCatalog(hiveName, hive);
>         tableEnv.useCatalog(hiveName);
>         String query = "select * from " + hiveName + "." + hiveDB + ".testtbl2 where id = 20200202";
>         Table table = tableEnv.sqlQuery(query);
>         String newTableName = "testtbl2_1";
>         table.insertInto(hiveName, hiveDB, newTableName);
>         tableEnv.execute("test");
>     }
> }
>
>
> HiveMetastoreClientFactory:
> public static HiveMetastoreClientWrapper create(HiveConf hiveConf, String hiveVersion) {
>     Preconditions.checkNotNull(hiveVersion, "Hive version cannot be null");
>     if (System.getProperty("java.security.krb5.conf") != null) {
>         if (System.getProperty("had_set_kerberos") == null) {
>             String principal = "sloth/dev@BDMS.163.COM";
>             String keytab = "/Users/yexianxun/dev/env/mammut-test-hive/sloth.keytab";
>             try {
>                 sun.security.krb5.Config.refresh();
>                 UserGroupInformation.setConfiguration(hiveConf);
>                 UserGroupInformation.loginUserFromKeytab(principal, keytab);
>                 System.setProperty("had_set_kerberos", "true");
>             } catch (Exception e) {
>                 LOG.error("", e);
>             }
>         }
>     }
>     return new HiveMetastoreClientWrapper(hiveConf, hiveVersion);
> }
>
>
> HiveCatalog:
> private static HiveConf createHiveConf(@Nullable String hiveConfDir) {
>     LOG.info("Setting hive conf dir as {}", hiveConfDir);
>     try {
>         HiveConf.setHiveSiteLocation(
>                 hiveConfDir == null ? null : Paths.get(hiveConfDir, "hive-site.xml").toUri().toURL());
>     } catch (MalformedURLException e) {
>         throw new CatalogException(
>                 String.format("Failed to get hive-site.xml from %s", hiveConfDir), e);
>     }
>
>     // create HiveConf from hadoop configuration
>     HiveConf hiveConf = new HiveConf(
>             HadoopUtils.getHadoopConfiguration(new org.apache.flink.configuration.Configuration()),
>             HiveConf.class);
>     try {
>         hiveConf.addResource(Paths.get(hiveConfDir, "hdfs-site.xml").toUri().toURL());
>         hiveConf.addResource(Paths.get(hiveConfDir, "core-site.xml").toUri().toURL());
>     } catch (MalformedURLException e) {
>         throw new CatalogException(String.format("Failed to get hdfs|core-site.xml from %s", hiveConfDir), e);
>     }
>     return hiveConf;
> }
>
>
> Running the testCatalogQuerySink method fails with the following error:
> org.apache.flink.runtime.client.JobExecutionException: Could not retrieve JobResult.
>
> at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:622)
> at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:117)
> at org.apache.flink.table.planner.delegation.BatchExecutor.execute(BatchExecutor.java:55)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:410)
> at api.HiveCatalogTest.testCatalogQuerySink(HiveCatalogMumTest.java:234)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
> at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
> at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
> at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
> at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> ... 6 more
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
> at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
> at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
> at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
> at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
> ... 7 more
> Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Can't get Master Kerberos principal for use as renewer
> at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:270)
> at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:907)
> at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:230)
> at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
> at org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
> at org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
> at org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
> at org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
> at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
> at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
> at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
> at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
> at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
> ... 10 more
> Caused by: java.io.IOException: Can't get Master Kerberos principal for use as renewer
> at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
> at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
> at org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:159)
> at org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:63)
> at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:256)
> ... 22 more
>
>
> 测试sink的方法是能够正常插入数据,但是在hive source时报这个错误,感觉是获取deleg
> token时返回空导致的。不知道具体应该怎么解决
>
>
>
>
>
>
>
>
>
>
>

Re: Hive Source With Kerberos authentication issue

Posted by 叶贤勋 <yx...@163.com>.
Wrapping it in doAs works. I now run all the authentication-related Hive operations in the hive connector inside doAs, which solves the authentication problem.
The stacktrace I posted earlier was printed with our company's repackaged hive-exec jar, which is why it doesn't match the upstream source; the official hive-exec-2.1.1.jar has the same problem.
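Besides the manual loginUserFromKeytab-plus-doAs approach described above, Flink also ships a built-in Kerberos login that performs the keytab login for the whole process before the user code runs. A rough sketch of the relevant `flink-conf.yaml` entries (the paths and principal are placeholders, not values from this thread):

```yaml
# flink-conf.yaml — let Flink's security module do the Kerberos login
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/key.keytab
security.kerberos.login.principal: user/host@EXAMPLE.COM
```

With this in place the client and cluster entry points run inside `HadoopSecurityContext.runSecured(...)` (visible in the earlier stack traces), so code such as the HiveCatalog registration executes under the logged-in UGI without explicit doAs wrapping.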






On Mar 5, 2020 at 13:52, Rui Li <li...@apache.org> wrote:
Could you first try the doAs approach, e.g. do the HiveCatalog registration inside UserGroupInformation.getLoginUser().doAs(), to check whether HiveMetaStoreClient is failing to pick up your login user's credentials.
Also, is your Hive version really 2.1.1? The stacktrace doesn't match the 2.1.1 code, e.g. line 562 of HiveMetaStoreClient.java:
https://github.com/apache/hive/blob/rel/release-2.1.1/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L562

On Wed, Mar 4, 2020 at 9:17 PM 叶贤勋 <yx...@163.com> wrote:

Hi,
The datanucleus jar problem is solved; previously the client apparently wasn't connecting to the HMS through hive.metastore.uris.
I do the Kerberos login inside HiveCatalog's open method,
UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
and the login succeeds. In theory, once the Kerberos login succeeds, this process should have access to the metastore, but creating the metastore client fails with the following error.

2020-03-04 20:23:17,191 DEBUG org.apache.flink.table.catalog.hive.HiveCatalog - Hive MetaStore Uris is thrift://***1:9083,thrift://***2:9083.
2020-03-04 20:23:17,192 INFO  org.apache.flink.table.catalog.hive.HiveCatalog - Created HiveCatalog 'myhive'
2020-03-04 20:23:17,360 INFO  org.apache.hadoop.security.UserGroupInformation - Login successful for user ***/dev@***.COM using keytab file /Users/yexianxun/IdeaProjects/flink-1.9.0/build-target/examples/hive/kerberos/key.keytab
2020-03-04 20:23:17,360 DEBUG org.apache.flink.table.catalog.hive.HiveCatalog - login user by kerberos, principal is ***/dev@***.CO, login is true
2020-03-04 20:23:17,374 INFO  org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
2020-03-04 20:23:17,374 DEBUG org.apache.curator.CuratorZookeeperClient - Starting
2020-03-04 20:23:17,374 DEBUG org.apache.curator.ConnectionState - Starting
2020-03-04 20:23:17,374 DEBUG org.apache.curator.ConnectionState - reset
2020-03-04 20:23:17,374 INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=***1:2181,***2:2181,***3:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@6b52dd31
2020-03-04 20:23:17,379 DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: HiveZooKeeperClient
2020-03-04 20:23:17,381 WARN  org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: Unable to obtain password from user
Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2020-03-04 20:23:17,381 INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server ***1:2181
2020-03-04 20:23:17,381 ERROR org.apache.curator.ConnectionState - Authentication failed
2020-03-04 20:23:17,384 INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to ***1:2181, initiating session
2020-03-04 20:23:17,384 DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on ***1:2181
2020-03-04 20:23:17,393 INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server ***1:2181, sessionid = 0x16f7af0645c25a8, negotiated timeout = 40000
2020-03-04 20:23:17,393 INFO  org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
2020-03-04 20:23:17,397 DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16f7af0645c25a8, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,292064345364,0  request:: '/hive_base,F  response:: s{17179869635,17179869635,1527576303010,1527576303010,0,3,0,0,0,1,249117832596}
2020-03-04 20:23:17,400 DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16f7af0645c25a8, packet:: clientPath:null serverPath:null finished:false header:: 2,12  replyHeader:: 2,292064345364,0  request:: '/hive_base/namespaces/hive/uris,F  response:: v{'dGhyaWZ0Oi8vaHphZGctYmRtcy03LnNlcnZlci4xNjMub3JnOjkwODM=,'dGhyaWZ0Oi8vaHphZGctYmRtcy04LnNlcnZlci4xNjMub3JnOjkwODM=},s{17179869664,17179869664,1527576306106,1527576306106,0,1106,0,0,0,2,292063632993}
2020-03-04 20:23:17,401 INFO  hive.metastore - atlasProxy is set to
2020-03-04 20:23:17,401 INFO  hive.metastore - Trying to connect to metastore with URI thrift://hzadg-bdms-7.server.163.org:9083
2020-03-04 20:23:17,408 INFO  hive.metastore - tokenStrForm should not be null for querynull
2020-03-04 20:23:17,432 DEBUG org.apache.thrift.transport.TSaslTransport - opening transport org.apache.thrift.transport.TSaslClientTransport@3c69362a
2020-03-04 20:23:17,441 ERROR org.apache.thrift.transport.TSaslTransport
- SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by
GSSException: No valid credentials provided (Mechanism level: Failed to
find any Kerberos tgt)]
at
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at
org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at
org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:562)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:351)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:213)
at
org.apache.flink.table.catalog.hive.client.HiveShimV211.getHiveMetastoreClient(HiveShimV211.java:68)
at
org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:225)
at
org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:66)
at
org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
at
org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:266)
at
org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
at
org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
......business logic......
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
at
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
at
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
at
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
at
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
at
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: GSSException: No valid credentials provided (Mechanism level:
Failed to find any Kerberos tgt)
at
sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at
sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at
sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at
sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 42 more
2020-03-04 20:23:17,443 DEBUG org.apache.thrift.transport.TSaslTransport
- CLIENT: Writing message with status BAD and payload
length 19
2020-03-04 20:23:17,445 WARN  hive.metastore
- Failed to connect to the MetaStore Server...
org.apache.thrift.transport.TTransportException: GSS initiate failed
at
org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at
org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:562)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:351)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:213)
at
org.apache.flink.table.catalog.hive.client.HiveShimV211.getHiveMetastoreClient(HiveShimV211.java:68)
at
org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:225)
at
org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:66)
at
org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
at
org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:266)
at
org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
at
org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
......business logic......
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
at
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
at
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
at
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
at
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
at
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
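
For the "works locally, fails on the cluster" symptom, one common remedy (a hedged suggestion, not confirmed for this setup) is to let Flink itself manage the Kerberos login through flink-conf.yaml, so the keytab is shipped to the YARN containers, instead of relying on an in-job UserGroupInformation login that only runs on the client. Keytab path and principal below are placeholders:

```yaml
# flink-conf.yaml -- placeholder keytab path and principal
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /home/***/key.keytab
security.kerberos.login.principal: ***/dev@***.COM
security.kerberos.login.contexts: Client
```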


叶贤勋
yxx_cmhd@163.com

On Mar 3, 2020 at 19:04, Rui Li <li...@apache.org> wrote:

datanucleus is used on the HMS side. If removing the datanucleus dependencies causes errors, your code is trying to create an embedded
metastore. Is that the intended behavior? I assume you have a remote HMS and want HiveCatalog to connect to it?

On Tue, Mar 3, 2020 at 4:00 PM 叶贤勋 <yx...@163.com> wrote:

The hive conf should be correct; the earlier UserGroupInformation login succeeds.
Without the datanucleus dependencies I get class-not-found errors:
1. java.lang.ClassNotFoundException:
org.datanucleus.api.jdo.JDOPersistenceManagerFactory
2. Caused by: org.datanucleus.exceptions.NucleusUserException: There is no
available StoreManager of type "rdbms". Please make sure you have specified
"datanucleus.storeManagerType" correctly and that all relevant plugins are
in the CLASSPATH
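
A quick way to check which path is being taken (a hedged sketch; the host name is a placeholder): if hive.metastore.uris is missing from the hive-site.xml that HiveCatalog reads, the Hive client falls back to an embedded metastore, which is exactly what pulls in the datanucleus/JDO stack. With a remote, Kerberos-secured HMS configured, those dependencies should become unnecessary:

```xml
<!-- hive-site.xml: placeholder host; this file must be visible to HiveCatalog -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hms-host.example.com:9083</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
```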



On Mar 2, 2020 at 11:50, Rui Li <li...@apache.org> wrote:

From the log you posted it looks like an embedded metastore was created. Could you check whether HiveCatalog is reading an incorrect hive
conf? Also, are all the maven dependencies you listed packaged into your Flink job jar? Dependencies such as datanucleus should not be needed.

On Sat, Feb 29, 2020 at 10:42 PM 叶贤勋 <yx...@163.com> wrote:

Hi 李锐, thanks for your reply.
The earlier problem was solved by setting yarn.resourcemanager.principal.
But now another issue has appeared; please take a look.

Background: the Flink job still sources & sinks a Kerberos-secured Hive. The same code passes Kerberos authentication when tested locally and can query and insert into Hive, but the job submitted to the cluster fails with a Kerberos authentication error.
Flink: 1.9.1; flink-1.9.1/lib/ contains flink-dist_2.11-1.9.1.jar,
flink-shaded-hadoop-2-uber-2.7.5-7.0.jar, log4j-1.2.17.jar,
slf4j-log4j12-1.7.15.jar
Hive: 2.1.1
Main dependencies of the Flink job:
[INFO] +- org.apache.flink:flink-table-api-java:jar:flink-1.9.1:compile
[INFO] |  +- org.apache.flink:flink-table-common:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-core:jar:flink-1.9.1:compile
[INFO] |  |     +-
org.apache.flink:flink-annotations:jar:flink-1.9.1:compile
[INFO] |  |     +-
org.apache.flink:flink-metrics-core:jar:flink-1.9.1:compile
[INFO] |  |     \- com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
[INFO] |  |        +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
[INFO] |  |        \- org.objenesis:objenesis:jar:2.1:compile
[INFO] |  +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
[INFO] |  \- org.apache.flink:force-shading:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-table-planner-blink_2.11:jar:flink-1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-api-scala_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.scala-lang:scala-reflect:jar:2.11.12:compile
[INFO] |  |  \- org.scala-lang:scala-compiler:jar:2.11.12:compile
[INFO] |  +-
org.apache.flink:flink-table-api-java-bridge_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.apache.flink:flink-java:jar:flink-1.9.1:compile
[INFO] |  |  \-
org.apache.flink:flink-streaming-java_2.11:jar:1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-api-scala-bridge_2.11:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-scala_2.11:jar:flink-1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-runtime-blink_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.codehaus.janino:janino:jar:3.0.9:compile
[INFO] |  |  \- org.apache.calcite.avatica:avatica-core:jar:1.15.0:compile
[INFO] |  \- org.reflections:reflections:jar:0.9.10:compile
[INFO] +- org.apache.flink:flink-table-planner_2.11:jar:flink-1.9.1:compile
[INFO] +- org.apache.commons:commons-lang3:jar:3.9:compile
[INFO] +- com.typesafe.akka:akka-actor_2.11:jar:2.5.21:compile
[INFO] |  +- org.scala-lang:scala-library:jar:2.11.8:compile
[INFO] |  +- com.typesafe:config:jar:1.3.3:compile
[INFO] |  \-
org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
[INFO] +- org.apache.flink:flink-sql-client_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-clients_2.11:jar:1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-optimizer_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-streaming-scala_2.11:jar:1.9.1:compile
[INFO] |  +- log4j:log4j:jar:1.2.17:compile
[INFO] |  \- org.apache.flink:flink-shaded-jackson:jar:2.9.8-7.0:compile
[INFO] +- org.apache.flink:flink-json:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-csv:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-hbase_2.11:jar:1.9.1:compile
[INFO] +- org.apache.hbase:hbase-server:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-protobuf:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-netty:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-miscellaneous:jar:2.2.1:compile
[INFO] |  |  \-
com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
[INFO] |  +- org.apache.hbase:hbase-common:jar:2.2.1:compile
[INFO] |  |  \-
com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1:compile
[INFO] |  +- org.apache.hbase:hbase-http:jar:2.2.1:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-util:jar:9.3.27.v20190418:compile
[INFO] |  |  +-
org.eclipse.jetty:jetty-util-ajax:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-http:jar:9.3.27.v20190418:compile
[INFO] |  |  +-
org.eclipse.jetty:jetty-security:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.glassfish.jersey.core:jersey-server:jar:2.25.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.core:jersey-common:jar:2.25.1:compile
[INFO] |  |  |  |  +-
org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1:compile
[INFO] |  |  |  |  \-
org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.core:jersey-client:jar:2.25.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.media:jersey-media-jaxb:jar:2.25.1:compile
[INFO] |  |  |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
[INFO] |  |  |  +- org.glassfish.hk2:hk2-api:jar:2.5.0-b32:compile
[INFO] |  |  |  |  +- org.glassfish.hk2:hk2-utils:jar:2.5.0-b32:compile
[INFO] |  |  |  |  \-
org.glassfish.hk2.external:aopalliance-repackaged:jar:2.5.0-b32:compile
[INFO] |  |  |  +-
org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32:compile
[INFO] |  |  |  \- org.glassfish.hk2:hk2-locator:jar:2.5.0-b32:compile
[INFO] |  |  +-


org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.25.1:compile
[INFO] |  |  \- javax.ws.rs:javax.ws.rs-api:jar:2.0.1:compile
[INFO] |  +- org.apache.hbase:hbase-protocol:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-protocol-shaded:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-procedure:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-client:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-zookeeper:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-replication:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-metrics-api:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-metrics:jar:2.2.1:compile
[INFO] |  +- commons-codec:commons-codec:jar:1.10:compile
[INFO] |  +- org.apache.hbase:hbase-hadoop-compat:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-hadoop2-compat:jar:2.2.1:compile
[INFO] |  +- org.eclipse.jetty:jetty-server:jar:9.3.27.v20190418:compile
[INFO] |  |  \- org.eclipse.jetty:jetty-io:jar:9.3.27.v20190418:compile
[INFO] |  +- org.eclipse.jetty:jetty-servlet:jar:9.3.27.v20190418:compile
[INFO] |  +- org.eclipse.jetty:jetty-webapp:jar:9.3.27.v20190418:compile
[INFO] |  |  \- org.eclipse.jetty:jetty-xml:jar:9.3.27.v20190418:compile
[INFO] |  +- org.glassfish.web:javax.servlet.jsp:jar:2.3.2:compile
[INFO] |  |  \- org.glassfish:javax.el:jar:3.0.1-b11:compile (version
selected from constraint [3.0.0,))
[INFO] |  +- javax.servlet.jsp:javax.servlet.jsp-api:jar:2.3.1:compile
[INFO] |  +- io.dropwizard.metrics:metrics-core:jar:3.2.6:compile
[INFO] |  +- commons-io:commons-io:jar:2.5:compile
[INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:compile
[INFO] |  +- org.apache.zookeeper:zookeeper:jar:3.4.10:compile
[INFO] |  +- javax.servlet:javax.servlet-api:jar:3.1.0:compile
[INFO] |  +- org.apache.htrace:htrace-core4:jar:4.2.0-incubating:compile
[INFO] |  +- com.lmax:disruptor:jar:3.3.6:compile
[INFO] |  +- commons-logging:commons-logging:jar:1.2:compile
[INFO] |  +- org.apache.commons:commons-crypto:jar:1.0.0:compile
[INFO] |  +- org.apache.hadoop:hadoop-distcp:jar:2.8.5:compile
[INFO] |  \- org.apache.yetus:audience-annotations:jar:0.5.0:compile
[INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO] +- mysql:mysql-connector-java:jar:8.0.18:compile
[INFO] +- org.apache.flink:flink-connector-hive_2.11:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-hadoop-compatibility_2.11:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-shaded-hadoop-2-uber:jar:2.7.5-7.0:provided
[INFO] +- org.apache.hive:hive-exec:jar:2.1.1:compile
[INFO] |  +- org.apache.hive:hive-ant:jar:2.1.1:compile
[INFO] |  |  \- org.apache.velocity:velocity:jar:1.5:compile
[INFO] |  |     \- oro:oro:jar:2.0.8:compile
[INFO] |  +- org.apache.hive:hive-llap-tez:jar:2.1.1:compile
[INFO] |  |  +- org.apache.hive:hive-common:jar:2.1.1:compile
[INFO] |  |  |  +- org.apache.hive:hive-storage-api:jar:2.1.1:compile
[INFO] |  |  |  +- org.apache.hive:hive-orc:jar:2.1.1:compile
[INFO] |  |  |  |  \- org.iq80.snappy:snappy:jar:0.2:compile
[INFO] |  |  |  +-
org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127:compile
[INFO] |  |  |  |  +-
org.apache.geronimo.specs:geronimo-jta_1.1_spec:jar:1.1.1:compile
[INFO] |  |  |  |  +- javax.mail:mail:jar:1.4.1:compile
[INFO] |  |  |  |  +- javax.activation:activation:jar:1.1:compile
[INFO] |  |  |  |  +-
org.apache.geronimo.specs:geronimo-jaspic_1.0_spec:jar:1.0:compile
[INFO] |  |  |  |  +-
org.apache.geronimo.specs:geronimo-annotation_1.0_spec:jar:1.1.1:compile
[INFO] |  |  |  |  \- asm:asm-commons:jar:3.1:compile
[INFO] |  |  |  |     \- asm:asm-tree:jar:3.1:compile
[INFO] |  |  |  |        \- asm:asm:jar:3.1:compile
[INFO] |  |  |  +-
org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016:compile
[INFO] |  |  |  +- joda-time:joda-time:jar:2.8.1:compile
[INFO] |  |  |  +- org.json:json:jar:20160810:compile
[INFO] |  |  |  +- io.dropwizard.metrics:metrics-jvm:jar:3.1.0:compile
[INFO] |  |  |  +- io.dropwizard.metrics:metrics-json:jar:3.1.0:compile
[INFO] |  |  |  \-


com.github.joshelser:dropwizard-metrics-hadoop-metrics2-reporter:jar:0.1.2:compile
[INFO] |  |  \- org.apache.hive:hive-llap-client:jar:2.1.1:compile
[INFO] |  |     \- org.apache.hive:hive-llap-common:jar:2.1.1:compile
[INFO] |  |        \- org.apache.hive:hive-serde:jar:2.1.1:compile
[INFO] |  |           +- org.apache.hive:hive-service-rpc:jar:2.1.1:compile
[INFO] |  |           |  +- tomcat:jasper-compiler:jar:5.5.23:compile
[INFO] |  |           |  |  +- javax.servlet:jsp-api:jar:2.0:compile
[INFO] |  |           |  |  \- ant:ant:jar:1.6.5:compile
[INFO] |  |           |  +- tomcat:jasper-runtime:jar:5.5.23:compile
[INFO] |  |           |  |  +- javax.servlet:servlet-api:jar:2.4:compile
[INFO] |  |           |  |  \- commons-el:commons-el:jar:1.0:compile
[INFO] |  |           |  \- org.apache.thrift:libfb303:jar:0.9.3:compile
[INFO] |  |           +- org.apache.avro:avro:jar:1.7.7:compile
[INFO] |  |           |  \-
com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO] |  |           +- net.sf.opencsv:opencsv:jar:2.3:compile
[INFO] |  |           \-
org.apache.parquet:parquet-hadoop-bundle:jar:1.8.1:compile
[INFO] |  +- org.apache.hive:hive-shims:jar:2.1.1:compile
[INFO] |  |  +- org.apache.hive.shims:hive-shims-common:jar:2.1.1:compile
[INFO] |  |  |  \- org.apache.thrift:libthrift:jar:0.9.3:compile
[INFO] |  |  +- org.apache.hive.shims:hive-shims-0.23:jar:2.1.1:runtime
[INFO] |  |  |  \-
org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:2.6.1:runtime
[INFO] |  |  |     +-
org.apache.hadoop:hadoop-annotations:jar:2.6.1:runtime
[INFO] |  |  |     +-
com.google.inject.extensions:guice-servlet:jar:3.0:runtime
[INFO] |  |  |     +- com.google.inject:guice:jar:3.0:runtime
[INFO] |  |  |     |  +- javax.inject:javax.inject:jar:1:runtime
[INFO] |  |  |     |  \- aopalliance:aopalliance:jar:1.0:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-json:jar:1.9:runtime
[INFO] |  |  |     |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:runtime
[INFO] |  |  |     |  +-
org.codehaus.jackson:jackson-core-asl:jar:1.8.3:compile
[INFO] |  |  |     |  +-
org.codehaus.jackson:jackson-mapper-asl:jar:1.8.3:compile
[INFO] |  |  |     |  +-
org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:runtime
[INFO] |  |  |     |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:runtime
[INFO] |  |  |     +- com.sun.jersey.contribs:jersey-guice:jar:1.9:runtime
[INFO] |  |  |     |  \- com.sun.jersey:jersey-server:jar:1.9:runtime
[INFO] |  |  |     +-
org.apache.hadoop:hadoop-yarn-common:jar:2.6.1:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.1:runtime
[INFO] |  |  |     +- javax.xml.bind:jaxb-api:jar:2.2.2:runtime
[INFO] |  |  |     |  \- javax.xml.stream:stax-api:jar:1.0-2:runtime
[INFO] |  |  |     +- org.codehaus.jettison:jettison:jar:1.1:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-core:jar:1.9:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-client:jar:1.9:runtime
[INFO] |  |  |     +- org.mortbay.jetty:jetty-util:jar:6.1.26:runtime
[INFO] |  |  |     +-
org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.1:runtime
[INFO] |  |  |     |  \-
org.fusesource.leveldbjni:leveldbjni-all:jar:1.8:runtime
[INFO] |  |  |     +-


org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:2.6.1:runtime
[INFO] |  |  |     \-
org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:2.6.1:runtime
[INFO] |  |  |        \- org.mortbay.jetty:jetty:jar:6.1.26:runtime
[INFO] |  |  \-
org.apache.hive.shims:hive-shims-scheduler:jar:2.1.1:runtime
[INFO] |  +- commons-httpclient:commons-httpclient:jar:3.0.1:compile
[INFO] |  +- org.antlr:antlr-runtime:jar:3.4:compile
[INFO] |  |  +- org.antlr:stringtemplate:jar:3.2.1:compile
[INFO] |  |  \- antlr:antlr:jar:2.7.7:compile
[INFO] |  +- org.antlr:ST4:jar:4.0.4:compile
[INFO] |  +- org.apache.ant:ant:jar:1.9.1:compile
[INFO] |  |  \- org.apache.ant:ant-launcher:jar:1.9.1:compile
[INFO] |  +- org.apache.commons:commons-compress:jar:1.10:compile
[INFO] |  +- org.apache.ivy:ivy:jar:2.4.0:compile
[INFO] |  +- org.apache.curator:curator-framework:jar:2.6.0:compile
[INFO] |  |  \- org.apache.curator:curator-client:jar:2.6.0:compile
[INFO] |  +- org.apache.curator:apache-curator:pom:2.6.0:compile
[INFO] |  +- org.codehaus.groovy:groovy-all:jar:2.4.4:compile
[INFO] |  +- org.apache.calcite:calcite-core:jar:1.6.0:compile
[INFO] |  |  +- org.apache.calcite:calcite-linq4j:jar:1.6.0:compile
[INFO] |  |  +- commons-dbcp:commons-dbcp:jar:1.4:compile
[INFO] |  |  |  \- commons-pool:commons-pool:jar:1.5.4:compile
[INFO] |  |  +- net.hydromatic:aggdesigner-algorithm:jar:6.0:compile
[INFO] |  |  \- org.codehaus.janino:commons-compiler:jar:2.7.6:compile
[INFO] |  +- org.apache.calcite:calcite-avatica:jar:1.6.0:compile
[INFO] |  +- stax:stax-api:jar:1.0.1:compile
[INFO] |  \- jline:jline:jar:2.12:compile
[INFO] +- org.datanucleus:datanucleus-core:jar:4.1.6:compile
[INFO] +- org.datanucleus:datanucleus-api-jdo:jar:4.2.4:compile
[INFO] +- org.datanucleus:javax.jdo:jar:3.2.0-m3:compile
[INFO] |  \- javax.transaction:transaction-api:jar:1.1:compile
[INFO] +- org.datanucleus:datanucleus-rdbms:jar:4.1.9:compile
[INFO] +- hadoop-lzo:hadoop-lzo:jar:0.4.14:compile
[INFO] \- org.apache.flink:flink-runtime-web_2.11:jar:1.9.1:provided
[INFO]    +- org.apache.flink:flink-runtime_2.11:jar:1.9.1:compile
[INFO]    |  +-
org.apache.flink:flink-queryable-state-client-java:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-hadoop-fs:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-shaded-asm-6:jar:6.2.1-7.0:compile
[INFO]    |  +- com.typesafe.akka:akka-stream_2.11:jar:2.5.21:compile
[INFO]    |  |  +- org.reactivestreams:reactive-streams:jar:1.0.2:compile
[INFO]    |  |  \- com.typesafe:ssl-config-core_2.11:jar:0.3.7:compile
[INFO]    |  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21:compile
[INFO]    |  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.11:compile
[INFO]    |  +- org.clapper:grizzled-slf4j_2.11:jar:1.3.2:compile
[INFO]    |  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
[INFO]    |  +- org.xerial.snappy:snappy-java:jar:1.1.4:compile
[INFO]    |  \- com.twitter:chill_2.11:jar:0.7.6:compile
[INFO]    |     \- com.twitter:chill-java:jar:0.7.6:compile
[INFO]    +-
org.apache.flink:flink-shaded-netty:jar:4.1.32.Final-7.0:compile
[INFO]    +- org.apache.flink:flink-shaded-guava:jar:18.0-7.0:compile
[INFO]    \- org.javassist:javassist:jar:3.19.0-GA:compile
[INFO] ————————————————————————————————————
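
Following Rui Li's observation that datanucleus should not be needed, the datanucleus artifacts in the tree above (used only by an embedded metastore) could be kept off the shaded jar once the remote HMS is reachable. A hedged pom sketch, assuming the fat jar is built with standard Maven scoping:

```xml
<!-- pom.xml: scope the embedded-metastore-only artifacts out of the shaded jar -->
<dependency>
  <groupId>org.datanucleus</groupId>
  <artifactId>datanucleus-core</artifactId>
  <version>4.1.6</version>
  <scope>provided</scope>
</dependency>
<!-- likewise for datanucleus-api-jdo, javax.jdo, and datanucleus-rdbms -->
```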

Logs:

2020-02-28 17:17:07,890 INFO
org.apache.hadoop.security.UserGroupInformation               - Login
successful for user ***/dev@***.COM using keytab file /home/***/key.keytab
The line above is printed by Flink; it shows the Kerberos login succeeded and the user logged in normally, yet the following exception was still thrown:
2020-02-28 17:17:08,658 INFO  org.apache.hadoop.hive.metastore.ObjectStore
- Setting MetaStore object pin classes with
hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
2020-02-28 17:17:09,280 INFO
org.apache.hadoop.hive.metastore.MetaStoreDirectSql           - Using
direct SQL, underlying DB is MYSQL
2020-02-28 17:17:09,283 INFO  org.apache.hadoop.hive.metastore.ObjectStore
- Initialized ObjectStore
2020-02-28 17:17:09,450 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore                - Added
admin role in metastore
2020-02-28 17:17:09,452 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore                - Added
public role in metastore
2020-02-28 17:17:09,474 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore                - No user is
added in admin role, since config is empty
2020-02-28 17:17:09,634 INFO
org.apache.flink.table.catalog.hive.HiveCatalog               - Connected
to Hive metastore
2020-02-28 17:17:09,635 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore                - 0:
get_database: ***
2020-02-28 17:17:09,637 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore.audit          - ugi=***
ip=unknown-ip-addr cmd=get_database: ***
2020-02-28 17:17:09,658 INFO  org.apache.hadoop.hive.ql.metadata.HiveUtils
- Adding metastore authorization provider:


org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
2020-02-28 17:17:10,166 WARN
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory       - The
short-circuit local reads feature cannot be used because libhadoop cannot
be loaded.
2020-02-28 17:17:10,391 WARN  org.apache.hadoop.ipc.Client
- Exception encountered while connecting to the server :
org.apache.hadoop.security.AccessControlException: Client cannot
authenticate via:[TOKEN, KERBEROS]
2020-02-28 17:17:10,397 WARN  org.apache.hadoop.ipc.Client
- Exception encountered while connecting to the server :
org.apache.hadoop.security.AccessControlException: Client cannot
authenticate via:[TOKEN, KERBEROS]
2020-02-28 17:17:10,398 INFO
org.apache.hadoop.io.retry.RetryInvocationHandler             - Exception
while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over
******.org/***.***.***.***:8020 after 1 fail over attempts. Trying to fail
over immediately.
java.io.IOException: Failed on local exception: java.io.IOException:
org.apache.hadoop.security.AccessControlException: Client cannot
authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
"***.***.***.org/***.***.***.***"; destination host is: "******.org":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at


org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy41.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.
getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at


sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at


sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at


org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at


org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy42.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
at


org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at


org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at


org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at


org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
at


org.apache.hadoop.hive.common.FileUtils.getFileStatusOrNull(FileUtils.java:770)
at


org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:368)
at


org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:343)
at


org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:152)
at


org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeReadDatabase(AuthorizationPreEventListener.java:204)
at


org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.onEvent(AuthorizationPreEventListener.java:152)
at


org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2153)
at


org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:932)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at


sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at


sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at


org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at


org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy35.get_database(Unknown Source)
at


org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at


sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at


sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at


org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:150)
at com.sun.proxy.$Proxy36.getDatabase(Unknown Source)
at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.
getDatabase(HiveMetastoreClientWrapper.java:102)
at


org.apache.flink.table.catalog.hive.HiveCatalog.databaseExists(HiveCatalog.java:347)
at
org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:244)
at


org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
at


org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)



……the code elided here calls UserGroupInformation.loginUserFromKeytab(principal, keytab); and the login succeeds……
at this is my code.main(MyMainClass.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at


sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at


sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at


org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
at


org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
at


org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
at


org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1452)
... 67 more
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
... 70 more
At this point the diagnosis looks like a polluted jar/classpath. Any pointers would be much appreciated. Thanks!

叶贤勋
yxx_cmhd@163.com


On Feb 28, 2020 at 15:16, Rui Li <li...@apache.org> wrote:

Hi 叶贤勋,




I don't have a Kerberos environment at hand, but judging from the TokenCache code (version 2.7.5), this exception is likely caused by failing to obtain the RM address or principal. Please check the following configs:
mapreduce.framework.name
yarn.resourcemanager.address
yarn.resourcemanager.principal
and verify that your Flink job can actually read them.
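For reference, a minimal sketch of what these keys typically look like in mapred-site.xml / yarn-site.xml on a Kerberized cluster. The host name and realm below are made-up placeholders, not values from this thread:

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host.example.com:8032</value> <!-- placeholder host -->
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>yarn/_HOST@EXAMPLE.COM</value> <!-- placeholder principal -->
</property>
```

If these files are not on the client classpath (e.g. HADOOP_CONF_DIR is not set where the job graph is built), the job may not see them even though the cluster does.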

On Fri, Feb 28, 2020 at 11:10 AM Kurt Young <yk...@gmail.com> wrote:

cc @lirui@apache.org <li...@apache.org>

Best,
Kurt


On Thu, Feb 13, 2020 at 10:22 AM 叶贤勋 <yx...@163.com> wrote:

Hi everyone,
I'm hitting an exception with a Hive 2.1.1 source under Kerberos authentication and would appreciate some advice.
Flink version: 1.9
Hive version: 2.1.1, with a HiveShimV211 implementation.
Code:
public class HiveCatalogTest {
    private static final Logger LOG = LoggerFactory.getLogger(HiveCatalogTest.class);
    private String hiveConfDir = "/Users/yexianxun/dev/env/test-hive"; // a local path
    private TableEnvironment tableEnv;
    private HiveCatalog hive;
    private String hiveName;
    private String hiveDB;
    private String version;

    @Before
    public void before() {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        tableEnv = TableEnvironment.create(settings);
        hiveName = "myhive";
        hiveDB = "sloth";
        version = "2.1.1";
    }

    @Test
    public void testCatalogQuerySink() throws Exception {
        hive = new HiveCatalog(hiveName, hiveDB, hiveConfDir, version);
        System.setProperty("java.security.krb5.conf", hiveConfDir + "/krb5.conf");
        tableEnv.getConfig().getConfiguration().setString("stream_mode", "false");
        tableEnv.registerCatalog(hiveName, hive);
        tableEnv.useCatalog(hiveName);
        String query = "select * from " + hiveName + "." + hiveDB + ".testtbl2 where id = 20200202";
        Table table = tableEnv.sqlQuery(query);
        String newTableName = "testtbl2_1";
        table.insertInto(hiveName, hiveDB, newTableName);
        tableEnv.execute("test");
    }
}


HiveMetastoreClientFactory:
public static HiveMetastoreClientWrapper create(HiveConf hiveConf, String hiveVersion) {
    Preconditions.checkNotNull(hiveVersion, "Hive version cannot be null");
    if (System.getProperty("java.security.krb5.conf") != null) {
        if (System.getProperty("had_set_kerberos") == null) {
            String principal = "sloth/dev@BDMS.163.COM";
            String keytab = "/Users/yexianxun/dev/env/mammut-test-hive/sloth.keytab";
            try {
                sun.security.krb5.Config.refresh();
                UserGroupInformation.setConfiguration(hiveConf);
                UserGroupInformation.loginUserFromKeytab(principal, keytab);
                System.setProperty("had_set_kerberos", "true");
            } catch (Exception e) {
                LOG.error("", e);
            }
        }
    }
    return new HiveMetastoreClientWrapper(hiveConf, hiveVersion);
}


HiveCatalog:
private static HiveConf createHiveConf(@Nullable String hiveConfDir) {
    LOG.info("Setting hive conf dir as {}", hiveConfDir);
    try {
        HiveConf.setHiveSiteLocation(
            hiveConfDir == null ? null : Paths.get(hiveConfDir, "hive-site.xml").toUri().toURL());
    } catch (MalformedURLException e) {
        throw new CatalogException(
            String.format("Failed to get hive-site.xml from %s", hiveConfDir), e);
    }

    // create HiveConf from hadoop configuration
    HiveConf hiveConf = new HiveConf(
        HadoopUtils.getHadoopConfiguration(new org.apache.flink.configuration.Configuration()),
        HiveConf.class);
    try {
        hiveConf.addResource(Paths.get(hiveConfDir, "hdfs-site.xml").toUri().toURL());
        hiveConf.addResource(Paths.get(hiveConfDir, "core-site.xml").toUri().toURL());
    } catch (MalformedURLException e) {
        throw new CatalogException(String.format("Failed to get hdfs|core-site.xml from %s", hiveConfDir), e);
    }
    return hiveConf;
}


Running the testCatalogQuerySink method fails with the following error:
org.apache.flink.runtime.client.JobExecutionException: Could not retrieve JobResult.
at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:622)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:117)
at org.apache.flink.table.planner.delegation.BatchExecutor.execute(BatchExecutor.java:55)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:410)
at api.HiveCatalogTest.testCatalogQuerySink(HiveCatalogMumTest.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
... 7 more
Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Can't get Master Kerberos principal for use as renewer
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:270)
at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:907)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:230)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
at org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
at org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
at org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
at org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
... 10 more
Caused by: java.io.IOException: Can't get Master Kerberos principal for use as renewer
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:159)
at org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:63)
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:256)
... 22 more


The sink test inserts data into Hive just fine; the error only occurs on the Hive source. It looks like fetching the delegation token returned null. I'm not sure how to fix this.
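For context, the "Can't get Master Kerberos principal for use as renewer" failure above comes from TokenCache failing to resolve the renewer principal from the configuration. A minimal self-contained sketch of that lookup logic follows (plain JDK, with a Map standing in for Hadoop's Configuration; this is an illustration of the condition, not Hadoop's actual code, and the principal value is a placeholder):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class RenewerLookupSketch {
    // Stand-in for the renewer-principal lookup: on YARN, the renewer
    // comes from yarn.resourcemanager.principal. If the key is missing,
    // token acquisition fails exactly like the stack trace above.
    static String getMasterPrincipal(Map<String, String> conf) throws IOException {
        String principal = conf.get("yarn.resourcemanager.principal");
        if (principal == null || principal.isEmpty()) {
            throw new IOException("Can't get Master Kerberos principal for use as renewer");
        }
        return principal;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> conf = new HashMap<>();
        conf.put("yarn.resourcemanager.principal", "rm/_HOST@EXAMPLE.COM"); // placeholder
        System.out.println(getMasterPrincipal(conf));
    }
}
```

So the first thing to verify is that the configuration object used when creating input splits actually contains yarn.resourcemanager.principal.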





叶贤勋
yxx_cmhd@163.com






Re: Hive Source With Kerberos authentication issue

Posted by Rui Li <li...@apache.org>.
Could you first try the doAs approach — e.g. do the HiveCatalog registration inside UserGroupInformation.getLoginUser().doAs() — to check whether HiveMetaStoreClient is actually picking up your login user's credentials?
Also, is your Hive version really 2.1.1? The stack trace doesn't match the 2.1.1 code, e.g. line 562 of HiveMetaStoreClient.java:
https://github.com/apache/hive/blob/rel/release-2.1.1/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L562
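A minimal sketch of the doAs pattern being suggested. The UserGroupInformation calls are shown only as comments because they need hadoop-common on the classpath; the runnable wrapper below just demonstrates the shape of the call with JDK types:

```java
import java.security.PrivilegedExceptionAction;

public class DoAsSketch {
    // Shape of UserGroupInformation.getLoginUser().doAs(action): run the
    // catalog registration inside the logged-in user's security context so
    // the metastore client picks up the Kerberos credentials.
    static <T> T runAsLoginUser(PrivilegedExceptionAction<T> action) throws Exception {
        // Real version (needs hadoop-common):
        //   UserGroupInformation ugi = UserGroupInformation.getLoginUser();
        //   return ugi.doAs(action);
        return action.run();
    }

    public static void main(String[] args) throws Exception {
        String result = runAsLoginUser(() -> {
            // In the real job this is where tableEnv.registerCatalog(hiveName, hive)
            // would be called.
            return "catalog registered";
        });
        System.out.println(result);
    }
}
```

If registration succeeds inside doAs but fails outside it, the problem is that the login user's credentials are not propagating to the thread that opens the metastore connection.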

On Wed, Mar 4, 2020 at 9:17 PM 叶贤勋 <yx...@163.com> wrote:

> Hi,
>         The datanucleus jar problem is solved — the client was not going through hive.metastore.uris to reach the HMS before.
>         I do the Kerberos login inside HiveCatalog's open() method:
> UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
>         and the login succeeds. After a successful Kerberos login, this process should in principle be able to access the metastore, but creating the metastore
> client fails with the error below.
>
> 2020-03-04 20:23:17,191 DEBUG org.apache.flink.table.catalog.hive.HiveCatalog - Hive MetaStore Uris is thrift://***1:9083,thrift://***2:9083.
> 2020-03-04 20:23:17,192 INFO  org.apache.flink.table.catalog.hive.HiveCatalog - Created HiveCatalog 'myhive'
> 2020-03-04 20:23:17,360 INFO  org.apache.hadoop.security.UserGroupInformation - Login successful for user ***/dev@***.COM using keytab file /Users/yexianxun/IdeaProjects/flink-1.9.0/build-target/examples/hive/kerberos/key.keytab
> 2020-03-04 20:23:17,360 DEBUG org.apache.flink.table.catalog.hive.HiveCatalog - login user by kerberos, principal is ***/dev@***.CO, login is true
> 2020-03-04 20:23:17,374 INFO  org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
> 2020-03-04 20:23:17,374 DEBUG org.apache.curator.CuratorZookeeperClient - Starting
> 2020-03-04 20:23:17,374 DEBUG org.apache.curator.ConnectionState - Starting
> 2020-03-04 20:23:17,374 DEBUG org.apache.curator.ConnectionState - reset
> 2020-03-04 20:23:17,374 INFO  org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=***1:2181,***2:2181,***3:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@6b52dd31
> 2020-03-04 20:23:17,379 DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: HiveZooKeeperClient
> 2020-03-04 20:23:17,381 WARN  org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: Unable to obtain password from user. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
> 2020-03-04 20:23:17,381 INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server ***1:2181
> 2020-03-04 20:23:17,381 ERROR org.apache.curator.ConnectionState - Authentication failed
> 2020-03-04 20:23:17,384 INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to ***1:2181, initiating session
> 2020-03-04 20:23:17,384 DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on ***1:2181
> 2020-03-04 20:23:17,393 INFO  org.apache.zookeeper.ClientCnxn - Session establishment complete on server ***1:2181, sessionid = 0x16f7af0645c25a8, negotiated timeout = 40000
> 2020-03-04 20:23:17,393 INFO  org.apache.curator.framework.state.ConnectionStateManager - State change: CONNECTED
> 2020-03-04 20:23:17,397 DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16f7af0645c25a8, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,292064345364,0  request:: '/hive_base,F  response:: s{17179869635,17179869635,1527576303010,1527576303010,0,3,0,0,0,1,249117832596}
> 2020-03-04 20:23:17,400 DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x16f7af0645c25a8, packet:: clientPath:null serverPath:null finished:false header:: 2,12  replyHeader:: 2,292064345364,0  request:: '/hive_base/namespaces/hive/uris,F  response:: v{'dGhyaWZ0Oi8vaHphZGctYmRtcy03LnNlcnZlci4xNjMub3JnOjkwODM=,'dGhyaWZ0Oi8vaHphZGctYmRtcy04LnNlcnZlci4xNjMub3JnOjkwODM=},s{17179869664,17179869664,1527576306106,1527576306106,0,1106,0,0,0,2,292063632993}
> 2020-03-04 20:23:17,401 INFO  hive.metastore - atlasProxy is set to
> 2020-03-04 20:23:17,401 INFO  hive.metastore - Trying to connect to metastore with URI thrift://hzadg-bdms-7.server.163.org:9083
> 2020-03-04 20:23:17,408 INFO  hive.metastore - tokenStrForm should not be null for querynull
> 2020-03-04 20:23:17,432 DEBUG org.apache.thrift.transport.TSaslTransport - opening transport org.apache.thrift.transport.TSaslClientTransport@3c69362a
> 2020-03-04 20:23:17,441 ERROR org.apache.thrift.transport.TSaslTransport - SASL negotiation failure
> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
>   at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
>   at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
>   at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>   at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>   at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:562)
>   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:351)
>   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:213)
>   at org.apache.flink.table.catalog.hive.client.HiveShimV211.getHiveMetastoreClient(HiveShimV211.java:68)
>   at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:225)
>   at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:66)
>   at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
>   at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:266)
>   at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
>   at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
>   ...... (application logic) ......
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
>   at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
>   at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
>   at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
>   at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
>   at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
>   at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
>   at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
>   ... 42 more
> 2020-03-04 20:23:17,443 DEBUG org.apache.thrift.transport.TSaslTransport - CLIENT: Writing message with status BAD and payload length 19
> 2020-03-04 20:23:17,445 WARN  hive.metastore - Failed to connect to the MetaStore Server...
> org.apache.thrift.transport.TTransportException: GSS initiate failed
>   at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>   at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
>   at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>   at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>   at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:562)
>   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:351)
>   at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:213)
>   at org.apache.flink.table.catalog.hive.client.HiveShimV211.getHiveMetastoreClient(HiveShimV211.java:68)
>   at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:225)
>   at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:66)
>   at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
>   at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:266)
>   at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
>   at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
>   ...... (application logic) ......
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
>   at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
>   at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
>   at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
>   at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
>   at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
>   at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
>
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
> On Mar 3, 2020 at 19:04, Rui Li <li...@apache.org> wrote:
>
> datanucleus is used on the HMS side. If removing the datanucleus dependencies causes errors, your code is trying to create an embedded
> metastore. Is that the intended behavior? I'd expect you have a remote HMS and want HiveCatalog to connect to it?
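For reference, the setting that makes HiveCatalog talk to a remote HMS is hive.metastore.uris in hive-site.xml; when it is absent, Hive falls back to an embedded metastore, which is why the datanucleus classes get loaded. The host below is a placeholder, not a value from this thread:

```xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hms-host.example.com:9083</value> <!-- placeholder host -->
</property>
```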
>
> On Tue, Mar 3, 2020 at 4:00 PM 叶贤勋 <yx...@163.com> wrote:
>
> The hive conf should be correct — the earlier UserGroupInformation login succeeds.
> Without the datanucleus dependencies I get class-not-found errors:
> 1. java.lang.ClassNotFoundException:
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory
> 2. Caused by: org.datanucleus.exceptions.NucleusUserException: There is no
> available StoreManager of type "rdbms". Please make sure you have specified
> "datanucleus.storeManagerType" correctly and that all relevant plugins are
> in the CLASSPATH
>
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
> On Mar 2, 2020 at 11:50, Rui Li <li...@apache.org> wrote:
>
> From the log you posted, it looks like an embedded metastore is being created. Could you check whether HiveCatalog is reading an incorrect hive
> conf? Also, are all the maven dependencies you listed packaged into your Flink job jar? The datanucleus dependencies should not be needed.
>
> On Sat, Feb 29, 2020 at 10:42 PM 叶贤勋 <yx...@163.com> wrote:
>
> Hi Rui Li, thanks for your reply.
> The earlier problem was solved by setting yarn.resourcemanager.principal.
> But now another problem has come up — please take a look.
>
> Background: the Flink job still sources & sinks a Kerberos-secured Hive. The same code passes Kerberos authentication when tested locally,
> and can query Hive and insert data into it, but once the job is submitted to the cluster, Kerberos authentication fails.
> Flink: 1.9.1; flink-1.9.1/lib/ contains flink-dist_2.11-1.9.1.jar,
> flink-shaded-hadoop-2-uber-2.7.5-7.0.jar, log4j-1.2.17.jar,
> slf4j-log4j12-1.7.15.jar
> Hive: 2.1.1
> Main jar dependencies of the Flink job:
> [INFO] +- org.apache.flink:flink-table-api-java:jar:flink-1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-table-common:jar:flink-1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-core:jar:flink-1.9.1:compile
> [INFO] |  |     +-
> org.apache.flink:flink-annotations:jar:flink-1.9.1:compile
> [INFO] |  |     +-
> org.apache.flink:flink-metrics-core:jar:flink-1.9.1:compile
> [INFO] |  |     \- com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [INFO] |  |        +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
> [INFO] |  |        \- org.objenesis:objenesis:jar:2.1:compile
> [INFO] |  +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [INFO] |  \- org.apache.flink:force-shading:jar:1.9.1:compile
> [INFO] +-
> org.apache.flink:flink-table-planner-blink_2.11:jar:flink-1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-scala_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.scala-lang:scala-reflect:jar:2.11.12:compile
> [INFO] |  |  \- org.scala-lang:scala-compiler:jar:2.11.12:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-java-bridge_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.apache.flink:flink-java:jar:flink-1.9.1:compile
> [INFO] |  |  \-
> org.apache.flink:flink-streaming-java_2.11:jar:1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-scala-bridge_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-scala_2.11:jar:flink-1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-runtime-blink_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.codehaus.janino:janino:jar:3.0.9:compile
> [INFO] |  |  \- org.apache.calcite.avatica:avatica-core:jar:1.15.0:compile
> [INFO] |  \- org.reflections:reflections:jar:0.9.10:compile
> [INFO] +- org.apache.flink:flink-table-planner_2.11:jar:flink-1.9.1:compile
> [INFO] +- org.apache.commons:commons-lang3:jar:3.9:compile
> [INFO] +- com.typesafe.akka:akka-actor_2.11:jar:2.5.21:compile
> [INFO] |  +- org.scala-lang:scala-library:jar:2.11.8:compile
> [INFO] |  +- com.typesafe:config:jar:1.3.3:compile
> [INFO] |  \-
> org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
> [INFO] +- org.apache.flink:flink-sql-client_2.11:jar:1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-clients_2.11:jar:1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-optimizer_2.11:jar:1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-streaming-scala_2.11:jar:1.9.1:compile
> [INFO] |  +- log4j:log4j:jar:1.2.17:compile
> [INFO] |  \- org.apache.flink:flink-shaded-jackson:jar:2.9.8-7.0:compile
> [INFO] +- org.apache.flink:flink-json:jar:1.9.1:compile
> [INFO] +- org.apache.flink:flink-csv:jar:1.9.1:compile
> [INFO] +- org.apache.flink:flink-hbase_2.11:jar:1.9.1:compile
> [INFO] +- org.apache.hbase:hbase-server:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-protobuf:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-netty:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-miscellaneous:jar:2.2.1:compile
> [INFO] |  |  \-
> com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
> [INFO] |  +- org.apache.hbase:hbase-common:jar:2.2.1:compile
> [INFO] |  |  \-
> com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1:compile
> [INFO] |  +- org.apache.hbase:hbase-http:jar:2.2.1:compile
> [INFO] |  |  +- org.eclipse.jetty:jetty-util:jar:9.3.27.v20190418:compile
> [INFO] |  |  +-
> org.eclipse.jetty:jetty-util-ajax:jar:9.3.27.v20190418:compile
> [INFO] |  |  +- org.eclipse.jetty:jetty-http:jar:9.3.27.v20190418:compile
> [INFO] |  |  +-
> org.eclipse.jetty:jetty-security:jar:9.3.27.v20190418:compile
> [INFO] |  |  +- org.glassfish.jersey.core:jersey-server:jar:2.25.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.core:jersey-common:jar:2.25.1:compile
> [INFO] |  |  |  |  +-
> org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1:compile
> [INFO] |  |  |  |  \-
> org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.core:jersey-client:jar:2.25.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.media:jersey-media-jaxb:jar:2.25.1:compile
> [INFO] |  |  |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
> [INFO] |  |  |  +- org.glassfish.hk2:hk2-api:jar:2.5.0-b32:compile
> [INFO] |  |  |  |  +- org.glassfish.hk2:hk2-utils:jar:2.5.0-b32:compile
> [INFO] |  |  |  |  \-
> org.glassfish.hk2.external:aopalliance-repackaged:jar:2.5.0-b32:compile
> [INFO] |  |  |  +-
> org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32:compile
> [INFO] |  |  |  \- org.glassfish.hk2:hk2-locator:jar:2.5.0-b32:compile
> [INFO] |  |  +-
>
>
> org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.25.1:compile
> [INFO] |  |  \- javax.ws.rs:javax.ws.rs-api:jar:2.0.1:compile
> [INFO] |  +- org.apache.hbase:hbase-protocol:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-protocol-shaded:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-procedure:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-client:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-zookeeper:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-replication:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-metrics-api:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-metrics:jar:2.2.1:compile
> [INFO] |  +- commons-codec:commons-codec:jar:1.10:compile
> [INFO] |  +- org.apache.hbase:hbase-hadoop-compat:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-hadoop2-compat:jar:2.2.1:compile
> [INFO] |  +- org.eclipse.jetty:jetty-server:jar:9.3.27.v20190418:compile
> [INFO] |  |  \- org.eclipse.jetty:jetty-io:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.eclipse.jetty:jetty-servlet:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.eclipse.jetty:jetty-webapp:jar:9.3.27.v20190418:compile
> [INFO] |  |  \- org.eclipse.jetty:jetty-xml:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.glassfish.web:javax.servlet.jsp:jar:2.3.2:compile
> [INFO] |  |  \- org.glassfish:javax.el:jar:3.0.1-b11:compile (version selected from constraint [3.0.0,))
> [INFO] |  +- javax.servlet.jsp:javax.servlet.jsp-api:jar:2.3.1:compile
> [INFO] |  +- io.dropwizard.metrics:metrics-core:jar:3.2.6:compile
> [INFO] |  +- commons-io:commons-io:jar:2.5:compile
> [INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:compile
> [INFO] |  +- org.apache.zookeeper:zookeeper:jar:3.4.10:compile
> [INFO] |  +- javax.servlet:javax.servlet-api:jar:3.1.0:compile
> [INFO] |  +- org.apache.htrace:htrace-core4:jar:4.2.0-incubating:compile
> [INFO] |  +- com.lmax:disruptor:jar:3.3.6:compile
> [INFO] |  +- commons-logging:commons-logging:jar:1.2:compile
> [INFO] |  +- org.apache.commons:commons-crypto:jar:1.0.0:compile
> [INFO] |  +- org.apache.hadoop:hadoop-distcp:jar:2.8.5:compile
> [INFO] |  \- org.apache.yetus:audience-annotations:jar:0.5.0:compile
> [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
> [INFO] +- mysql:mysql-connector-java:jar:8.0.18:compile
> [INFO] +- org.apache.flink:flink-connector-hive_2.11:jar:1.9.1:compile
> [INFO] +- org.apache.flink:flink-hadoop-compatibility_2.11:jar:1.9.1:compile
> [INFO] +- org.apache.flink:flink-shaded-hadoop-2-uber:jar:2.7.5-7.0:provided
> [INFO] +- org.apache.hive:hive-exec:jar:2.1.1:compile
> [INFO] |  +- org.apache.hive:hive-ant:jar:2.1.1:compile
> [INFO] |  |  \- org.apache.velocity:velocity:jar:1.5:compile
> [INFO] |  |     \- oro:oro:jar:2.0.8:compile
> [INFO] |  +- org.apache.hive:hive-llap-tez:jar:2.1.1:compile
> [INFO] |  |  +- org.apache.hive:hive-common:jar:2.1.1:compile
> [INFO] |  |  |  +- org.apache.hive:hive-storage-api:jar:2.1.1:compile
> [INFO] |  |  |  +- org.apache.hive:hive-orc:jar:2.1.1:compile
> [INFO] |  |  |  |  \- org.iq80.snappy:snappy:jar:0.2:compile
> [INFO] |  |  |  +- org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127:compile
> [INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-jta_1.1_spec:jar:1.1.1:compile
> [INFO] |  |  |  |  +- javax.mail:mail:jar:1.4.1:compile
> [INFO] |  |  |  |  +- javax.activation:activation:jar:1.1:compile
> [INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-jaspic_1.0_spec:jar:1.0:compile
> [INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-annotation_1.0_spec:jar:1.1.1:compile
> [INFO] |  |  |  |  \- asm:asm-commons:jar:3.1:compile
> [INFO] |  |  |  |     \- asm:asm-tree:jar:3.1:compile
> [INFO] |  |  |  |        \- asm:asm:jar:3.1:compile
> [INFO] |  |  |  +- org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016:compile
> [INFO] |  |  |  +- joda-time:joda-time:jar:2.8.1:compile
> [INFO] |  |  |  +- org.json:json:jar:20160810:compile
> [INFO] |  |  |  +- io.dropwizard.metrics:metrics-jvm:jar:3.1.0:compile
> [INFO] |  |  |  +- io.dropwizard.metrics:metrics-json:jar:3.1.0:compile
> [INFO] |  |  |  \- com.github.joshelser:dropwizard-metrics-hadoop-metrics2-reporter:jar:0.1.2:compile
> [INFO] |  |  \- org.apache.hive:hive-llap-client:jar:2.1.1:compile
> [INFO] |  |     \- org.apache.hive:hive-llap-common:jar:2.1.1:compile
> [INFO] |  |        \- org.apache.hive:hive-serde:jar:2.1.1:compile
> [INFO] |  |           +- org.apache.hive:hive-service-rpc:jar:2.1.1:compile
> [INFO] |  |           |  +- tomcat:jasper-compiler:jar:5.5.23:compile
> [INFO] |  |           |  |  +- javax.servlet:jsp-api:jar:2.0:compile
> [INFO] |  |           |  |  \- ant:ant:jar:1.6.5:compile
> [INFO] |  |           |  +- tomcat:jasper-runtime:jar:5.5.23:compile
> [INFO] |  |           |  |  +- javax.servlet:servlet-api:jar:2.4:compile
> [INFO] |  |           |  |  \- commons-el:commons-el:jar:1.0:compile
> [INFO] |  |           |  \- org.apache.thrift:libfb303:jar:0.9.3:compile
> [INFO] |  |           +- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] |  |           |  \- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
> [INFO] |  |           +- net.sf.opencsv:opencsv:jar:2.3:compile
> [INFO] |  |           \- org.apache.parquet:parquet-hadoop-bundle:jar:1.8.1:compile
> [INFO] |  +- org.apache.hive:hive-shims:jar:2.1.1:compile
> [INFO] |  |  +- org.apache.hive.shims:hive-shims-common:jar:2.1.1:compile
> [INFO] |  |  |  \- org.apache.thrift:libthrift:jar:0.9.3:compile
> [INFO] |  |  +- org.apache.hive.shims:hive-shims-0.23:jar:2.1.1:runtime
> [INFO] |  |  |  \- org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:2.6.1:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-annotations:jar:2.6.1:runtime
> [INFO] |  |  |     +- com.google.inject.extensions:guice-servlet:jar:3.0:runtime
> [INFO] |  |  |     +- com.google.inject:guice:jar:3.0:runtime
> [INFO] |  |  |     |  +- javax.inject:javax.inject:jar:1:runtime
> [INFO] |  |  |     |  \- aopalliance:aopalliance:jar:1.0:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-json:jar:1.9:runtime
> [INFO] |  |  |     |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:runtime
> [INFO] |  |  |     |  +- org.codehaus.jackson:jackson-core-asl:jar:1.8.3:compile
> [INFO] |  |  |     |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.3:compile
> [INFO] |  |  |     |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:runtime
> [INFO] |  |  |     |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:runtime
> [INFO] |  |  |     +- com.sun.jersey.contribs:jersey-guice:jar:1.9:runtime
> [INFO] |  |  |     |  \- com.sun.jersey:jersey-server:jar:1.9:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-common:jar:2.6.1:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.1:runtime
> [INFO] |  |  |     +- javax.xml.bind:jaxb-api:jar:2.2.2:runtime
> [INFO] |  |  |     |  \- javax.xml.stream:stax-api:jar:1.0-2:runtime
> [INFO] |  |  |     +- org.codehaus.jettison:jettison:jar:1.1:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-core:jar:1.9:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-client:jar:1.9:runtime
> [INFO] |  |  |     +- org.mortbay.jetty:jetty-util:jar:6.1.26:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.1:runtime
> [INFO] |  |  |     |  \- org.fusesource.leveldbjni:leveldbjni-all:jar:1.8:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:2.6.1:runtime
> [INFO] |  |  |     \- org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:2.6.1:runtime
> [INFO] |  |  |        \- org.mortbay.jetty:jetty:jar:6.1.26:runtime
> [INFO] |  |  \- org.apache.hive.shims:hive-shims-scheduler:jar:2.1.1:runtime
> [INFO] |  +- commons-httpclient:commons-httpclient:jar:3.0.1:compile
> [INFO] |  +- org.antlr:antlr-runtime:jar:3.4:compile
> [INFO] |  |  +- org.antlr:stringtemplate:jar:3.2.1:compile
> [INFO] |  |  \- antlr:antlr:jar:2.7.7:compile
> [INFO] |  +- org.antlr:ST4:jar:4.0.4:compile
> [INFO] |  +- org.apache.ant:ant:jar:1.9.1:compile
> [INFO] |  |  \- org.apache.ant:ant-launcher:jar:1.9.1:compile
> [INFO] |  +- org.apache.commons:commons-compress:jar:1.10:compile
> [INFO] |  +- org.apache.ivy:ivy:jar:2.4.0:compile
> [INFO] |  +- org.apache.curator:curator-framework:jar:2.6.0:compile
> [INFO] |  |  \- org.apache.curator:curator-client:jar:2.6.0:compile
> [INFO] |  +- org.apache.curator:apache-curator:pom:2.6.0:compile
> [INFO] |  +- org.codehaus.groovy:groovy-all:jar:2.4.4:compile
> [INFO] |  +- org.apache.calcite:calcite-core:jar:1.6.0:compile
> [INFO] |  |  +- org.apache.calcite:calcite-linq4j:jar:1.6.0:compile
> [INFO] |  |  +- commons-dbcp:commons-dbcp:jar:1.4:compile
> [INFO] |  |  |  \- commons-pool:commons-pool:jar:1.5.4:compile
> [INFO] |  |  +- net.hydromatic:aggdesigner-algorithm:jar:6.0:compile
> [INFO] |  |  \- org.codehaus.janino:commons-compiler:jar:2.7.6:compile
> [INFO] |  +- org.apache.calcite:calcite-avatica:jar:1.6.0:compile
> [INFO] |  +- stax:stax-api:jar:1.0.1:compile
> [INFO] |  \- jline:jline:jar:2.12:compile
> [INFO] +- org.datanucleus:datanucleus-core:jar:4.1.6:compile
> [INFO] +- org.datanucleus:datanucleus-api-jdo:jar:4.2.4:compile
> [INFO] +- org.datanucleus:javax.jdo:jar:3.2.0-m3:compile
> [INFO] |  \- javax.transaction:transaction-api:jar:1.1:compile
> [INFO] +- org.datanucleus:datanucleus-rdbms:jar:4.1.9:compile
> [INFO] +- hadoop-lzo:hadoop-lzo:jar:0.4.14:compile
> [INFO] \- org.apache.flink:flink-runtime-web_2.11:jar:1.9.1:provided
> [INFO]    +- org.apache.flink:flink-runtime_2.11:jar:1.9.1:compile
> [INFO]    |  +- org.apache.flink:flink-queryable-state-client-java:jar:1.9.1:compile
> [INFO]    |  +- org.apache.flink:flink-hadoop-fs:jar:1.9.1:compile
> [INFO]    |  +- org.apache.flink:flink-shaded-asm-6:jar:6.2.1-7.0:compile
> [INFO]    |  +- com.typesafe.akka:akka-stream_2.11:jar:2.5.21:compile
> [INFO]    |  |  +- org.reactivestreams:reactive-streams:jar:1.0.2:compile
> [INFO]    |  |  \- com.typesafe:ssl-config-core_2.11:jar:0.3.7:compile
> [INFO]    |  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21:compile
> [INFO]    |  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.11:compile
> [INFO]    |  +- org.clapper:grizzled-slf4j_2.11:jar:1.3.2:compile
> [INFO]    |  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
> [INFO]    |  +- org.xerial.snappy:snappy-java:jar:1.1.4:compile
> [INFO]    |  \- com.twitter:chill_2.11:jar:0.7.6:compile
> [INFO]    |     \- com.twitter:chill-java:jar:0.7.6:compile
> [INFO]    +- org.apache.flink:flink-shaded-netty:jar:4.1.32.Final-7.0:compile
> [INFO]    +- org.apache.flink:flink-shaded-guava:jar:18.0-7.0:compile
> [INFO]    \- org.javassist:javassist:jar:3.19.0-GA:compile
> [INFO] ————————————————————————————————————
>
> Logs:
>
> 2020-02-28 17:17:07,890 INFO  org.apache.hadoop.security.UserGroupInformation               - Login successful for user ***/dev@***.COM using keytab file /home/***/key.keytab
> The line above comes from the Flink log; it shows the Kerberos login succeeded and the user could log in normally, yet the following exception was still thrown:
> 2020-02-28 17:17:08,658 INFO  org.apache.hadoop.hive.metastore.ObjectStore                  - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
> 2020-02-28 17:17:09,280 INFO  org.apache.hadoop.hive.metastore.MetaStoreDirectSql           - Using direct SQL, underlying DB is MYSQL
> 2020-02-28 17:17:09,283 INFO  org.apache.hadoop.hive.metastore.ObjectStore                  - Initialized ObjectStore
> 2020-02-28 17:17:09,450 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore                - Added admin role in metastore
> 2020-02-28 17:17:09,452 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore                - Added public role in metastore
> 2020-02-28 17:17:09,474 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore                - No user is added in admin role, since config is empty
> 2020-02-28 17:17:09,634 INFO  org.apache.flink.table.catalog.hive.HiveCatalog               - Connected to Hive metastore
> 2020-02-28 17:17:09,635 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore                - 0: get_database: ***
> 2020-02-28 17:17:09,637 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit          - ugi=*** ip=unknown-ip-addr cmd=get_database: ***
> 2020-02-28 17:17:09,658 INFO  org.apache.hadoop.hive.ql.metadata.HiveUtils                  - Adding metastore authorization provider: org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
> 2020-02-28 17:17:10,166 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory       - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 2020-02-28 17:17:10,391 WARN  org.apache.hadoop.ipc.Client                                  - Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> 2020-02-28 17:17:10,397 WARN  org.apache.hadoop.ipc.Client                                  - Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> 2020-02-28 17:17:10,398 INFO  org.apache.hadoop.io.retry.RetryInvocationHandler             - Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over ******.org/***.***.***.***:8020 after 1 fail over attempts. Trying to fail over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "***.***.***.org/***.***.***.***"; destination host is: "******.org":8020;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1480)
> at org.apache.hadoop.ipc.Client.call(Client.java:1413)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy41.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy42.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
> at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at org.apache.hadoop.hive.common.FileUtils.getFileStatusOrNull(FileUtils.java:770)
> at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:368)
> at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:343)
> at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:152)
> at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeReadDatabase(AuthorizationPreEventListener.java:204)
> at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.onEvent(AuthorizationPreEventListener.java:152)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2153)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:932)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
> at com.sun.proxy.$Proxy35.get_database(Unknown Source)
> at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1280)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:150)
> at com.sun.proxy.$Proxy36.getDatabase(Unknown Source)
> at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.getDatabase(HiveMetastoreClientWrapper.java:102)
> at org.apache.flink.table.catalog.hive.HiveCatalog.databaseExists(HiveCatalog.java:347)
> at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:244)
> at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
>
> ……(the frames elided here called UserGroupInformation.loginUserFromKeytab(principal, keytab) and passed authentication successfully)
> at this is my code.main(MyMainClass.java:24)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
> at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
> at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
> at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
> at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
> at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
> at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738)
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> ... 67 more
> Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561)
> at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
> ... 70 more
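A note on why a successful client-side keytab login does not automatically carry over: Hadoop RPC resolves the caller's identity from the JAAS Subject in effect at the moment the call is made, which is why the trace above shows UserGroupInformation.doAs / Subject.doAs frames wrapping the client. A stdlib-only sketch of that mechanism (class name and principal are hypothetical; no real KDC is involved):

```java
import javax.security.auth.Subject;
import java.security.AccessController;
import java.security.Principal;
import java.security.PrivilegedAction;
import java.util.Collections;

public class DoAsSketch {
    // Runs an action under a Subject, the way UserGroupInformation.doAs does,
    // and reports which principal the action sees on its access-control context.
    static String runAs() {
        Principal p = () -> "sloth/dev@EXAMPLE.COM"; // hypothetical principal
        Subject subject = new Subject(true, Collections.singleton(p),
                Collections.emptySet(), Collections.emptySet());
        return Subject.doAs(subject, (PrivilegedAction<String>) () -> {
            // Inside doAs, the Subject is bound to the current context;
            // Hadoop's SASL client looks it up exactly like this.
            Subject current = Subject.getSubject(AccessController.getContext());
            return current.getPrincipals().iterator().next().getName();
        });
    }

    public static void main(String[] args) {
        System.out.println(runAs()); // prints sloth/dev@EXAMPLE.COM
    }
}
```

The practical consequence: code that runs outside such a doAs scope (for example on another JVM in the cluster) sees no Kerberos identity, even if loginUserFromKeytab succeeded earlier on the client.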
> My current diagnosis is that this looks like it is caused by a polluted (conflicting) jar. Any pointers would be much appreciated. Thanks!
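One thing worth trying on the cluster: instead of calling UserGroupInformation in user code, let Flink itself perform the keytab login through flink-conf.yaml, so the credentials are available to the CLI, JobManager, and TaskManagers alike. A sketch, where the keytab path and principal are placeholders:

```yaml
# flink-conf.yaml: Kerberos settings Flink 1.9 understands
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /home/flink/key.keytab
security.kerberos.login.principal: sloth/dev@EXAMPLE.COM
# JAAS contexts the credentials should be wired into (e.g. for ZooKeeper)
security.kerberos.login.contexts: Client
```

With these set, Flink's HadoopSecurityContext (visible in the stack trace above) runs the whole entrypoint under the logged-in user.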
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
> On 2020-02-28 15:16, Rui Li <li...@apache.org> wrote:
>
> Hi 叶贤勋,
>
> I don't have a Kerberos environment at hand, but judging from the TokenCache code (version 2.7.5), this exception likely means the RM address or principal was not obtained correctly. Please check the following settings:
> mapreduce.framework.name
> yarn.resourcemanager.address
> yarn.resourcemanager.principal
> and also verify that your Flink job can actually read them.
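For reference, these settings normally come from the Hadoop client configuration on the submitting machine (mapreduce.framework.name from mapred-site.xml, the other two from yarn-site.xml). A minimal illustrative fragment, where the hostname and realm are placeholders:

```xml
<!-- yarn-site.xml (illustrative values only) -->
<!-- mapred-site.xml should additionally set mapreduce.framework.name = yarn -->
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>rm.example.org:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.principal</name>
    <value>yarn/_HOST@EXAMPLE.COM</value>
  </property>
</configuration>
```

Without yarn.resourcemanager.principal, TokenCache cannot determine the renewer for the HDFS delegation token, which matches the "Can't get Master Kerberos principal for use as renewer" error below.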
>
> On Fri, Feb 28, 2020 at 11:10 AM Kurt Young <yk...@gmail.com> wrote:
>
> cc @lirui@apache.org <li...@apache.org>
>
> Best,
> Kurt
>
>
> On Thu, Feb 13, 2020 at 10:22 AM 叶贤勋 <yx...@163.com> wrote:
>
> Hi everyone,
> I'm hitting an exception with a Kerberos-authenticated Hive 2.1.1 source and would like some advice.
> Flink version: 1.9
> Hive version: 2.1.1, with a HiveShimV211 implementation.
> Code:
> public class HiveCatalogTest {
>     private static final Logger LOG = LoggerFactory.getLogger(HiveCatalogTest.class);
>     private String hiveConfDir = "/Users/yexianxun/dev/env/test-hive"; // a local path
>     private TableEnvironment tableEnv;
>     private HiveCatalog hive;
>     private String hiveName;
>     private String hiveDB;
>     private String version;
>
>     @Before
>     public void before() {
>         EnvironmentSettings settings = EnvironmentSettings.newInstance()
>                 .useBlinkPlanner()
>                 .inBatchMode()
>                 .build();
>         tableEnv = TableEnvironment.create(settings);
>         hiveName = "myhive";
>         hiveDB = "sloth";
>         version = "2.1.1";
>     }
>
>     @Test
>     public void testCatalogQuerySink() throws Exception {
>         hive = new HiveCatalog(hiveName, hiveDB, hiveConfDir, version);
>         System.setProperty("java.security.krb5.conf", hiveConfDir + "/krb5.conf");
>         tableEnv.getConfig().getConfiguration().setString("stream_mode", "false");
>         tableEnv.registerCatalog(hiveName, hive);
>         tableEnv.useCatalog(hiveName);
>         String query = "select * from " + hiveName + "." + hiveDB + ".testtbl2 where id = 20200202";
>         Table table = tableEnv.sqlQuery(query);
>         String newTableName = "testtbl2_1";
>         table.insertInto(hiveName, hiveDB, newTableName);
>         tableEnv.execute("test");
>     }
> }
>
>
> HiveMetastoreClientFactory:
> public static HiveMetastoreClientWrapper create(HiveConf hiveConf, String hiveVersion) {
>     Preconditions.checkNotNull(hiveVersion, "Hive version cannot be null");
>     if (System.getProperty("java.security.krb5.conf") != null) {
>         if (System.getProperty("had_set_kerberos") == null) {
>             String principal = "sloth/dev@BDMS.163.COM";
>             String keytab = "/Users/yexianxun/dev/env/mammut-test-hive/sloth.keytab";
>             try {
>                 sun.security.krb5.Config.refresh();
>                 UserGroupInformation.setConfiguration(hiveConf);
>                 UserGroupInformation.loginUserFromKeytab(principal, keytab);
>                 System.setProperty("had_set_kerberos", "true");
>             } catch (Exception e) {
>                 LOG.error("", e);
>             }
>         }
>     }
>     return new HiveMetastoreClientWrapper(hiveConf, hiveVersion);
> }
>
>
> HiveCatalog:
> private static HiveConf createHiveConf(@Nullable String hiveConfDir) {
>     LOG.info("Setting hive conf dir as {}", hiveConfDir);
>     try {
>         HiveConf.setHiveSiteLocation(
>                 hiveConfDir == null ? null : Paths.get(hiveConfDir, "hive-site.xml").toUri().toURL());
>     } catch (MalformedURLException e) {
>         throw new CatalogException(
>                 String.format("Failed to get hive-site.xml from %s", hiveConfDir), e);
>     }
>
>     // create HiveConf from hadoop configuration
>     HiveConf hiveConf = new HiveConf(
>             HadoopUtils.getHadoopConfiguration(new org.apache.flink.configuration.Configuration()),
>             HiveConf.class);
>     try {
>         hiveConf.addResource(Paths.get(hiveConfDir, "hdfs-site.xml").toUri().toURL());
>         hiveConf.addResource(Paths.get(hiveConfDir, "core-site.xml").toUri().toURL());
>     } catch (MalformedURLException e) {
>         throw new CatalogException(String.format("Failed to get hdfs|core-site.xml from %s", hiveConfDir), e);
>     }
>     return hiveConf;
> }
>
>
> Running the testCatalogQuerySink method fails with the following error:
> org.apache.flink.runtime.client.JobExecutionException: Could not retrieve JobResult.
>
> at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:622)
> at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:117)
> at org.apache.flink.table.planner.delegation.BatchExecutor.execute(BatchExecutor.java:55)
> at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:410)
> at api.HiveCatalogTest.testCatalogQuerySink(HiveCatalogMumTest.java:234)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
> at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
> at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
> at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
> at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
> at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
> at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> ... 6 more
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
> at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
> at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
> at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
> at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
> ... 7 more
> Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Can't get Master Kerberos principal for use as renewer
> at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:270)
> at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:907)
> at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:230)
> at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
> at org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
> at org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
> at org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
> at org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
> at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
> at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
> at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
> at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
> at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
> ... 10 more
> Caused by: java.io.IOException: Can't get Master Kerberos principal for use as renewer
> at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
> at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
> at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
> at org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:159)
> at org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:63)
> at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:256)
> ... 22 more
>
>
> The sink test can insert data normally, but the Hive source fails with this error. It looks like obtaining the delegation token returned an empty result. I'm not sure how to fix this.
>
>
>
>
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
>
>
>

Re: Hive Source With Kerberos authentication issue

Posted by 叶贤勋 <yx...@163.com>.
Hi,
        The datanucleus jar issue has been resolved; it seems the catalog was previously not connecting to the HMS through hive.metastore.uris.
        I now do the Kerberos login inside HiveCatalog's open() method, via UserGroupInformation.loginUserFromKeytab(principal, keytabPath),
        and the login succeeds. In theory, once the Kerberos login succeeds, this process should have access to the metastore, but creating the metastore client reports the following error.
2020-03-04 20:23:17,191 DEBUG org.apache.flink.table.catalog.hive.HiveCatalog               - Hive MetaStore Uris is thrift://***1:9083,thrift://***2:9083.
2020-03-04 20:23:17,192 INFO  org.apache.flink.table.catalog.hive.HiveCatalog               - Created HiveCatalog 'myhive'
2020-03-04 20:23:17,360 INFO  org.apache.hadoop.security.UserGroupInformation               - Login successful for user ***/dev@***.COM using keytab file /Users/yexianxun/IdeaProjects/flink-1.9.0/build-target/examples/hive/kerberos/key.keytab
2020-03-04 20:23:17,360 DEBUG org.apache.flink.table.catalog.hive.HiveCatalog               - login user by kerberos, principal is ***/dev@***.CO, login is true
2020-03-04 20:23:17,374 INFO  org.apache.curator.framework.imps.CuratorFrameworkImpl        - Starting
2020-03-04 20:23:17,374 DEBUG org.apache.curator.CuratorZookeeperClient                     - Starting
2020-03-04 20:23:17,374 DEBUG org.apache.curator.ConnectionState                            - Starting
2020-03-04 20:23:17,374 DEBUG org.apache.curator.ConnectionState                            - reset
2020-03-04 20:23:17,374 INFO  org.apache.zookeeper.ZooKeeper                                - Initiating client connection, connectString=***1:2181,***2:2181,***3:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@6b52dd31
2020-03-04 20:23:17,379 DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient               - JAAS loginContext is: HiveZooKeeperClient
2020-03-04 20:23:17,381 WARN  org.apache.zookeeper.ClientCnxn                               - SASL configuration failed: javax.security.auth.login.LoginException: Unable to obtain password from user
 Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
2020-03-04 20:23:17,381 INFO  org.apache.zookeeper.ClientCnxn                               - Opening socket connection to server ***1:2181
2020-03-04 20:23:17,381 ERROR org.apache.curator.ConnectionState                            - Authentication failed
2020-03-04 20:23:17,384 INFO  org.apache.zookeeper.ClientCnxn                               - Socket connection established to ***1:2181, initiating session
2020-03-04 20:23:17,384 DEBUG org.apache.zookeeper.ClientCnxn                               - Session establishment request sent on ***1:2181
2020-03-04 20:23:17,393 INFO  org.apache.zookeeper.ClientCnxn                               - Session establishment complete on server ***1:2181, sessionid = 0x16f7af0645c25a8, negotiated timeout = 40000
2020-03-04 20:23:17,393 INFO  org.apache.curator.framework.state.ConnectionStateManager     - State change: CONNECTED
2020-03-04 20:23:17,397 DEBUG org.apache.zookeeper.ClientCnxn                               - Reading reply sessionid:0x16f7af0645c25a8, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,292064345364,0  request:: '/hive_base,F  response:: s{17179869635,17179869635,1527576303010,1527576303010,0,3,0,0,0,1,249117832596} 
2020-03-04 20:23:17,400 DEBUG org.apache.zookeeper.ClientCnxn                               - Reading reply sessionid:0x16f7af0645c25a8, packet:: clientPath:null serverPath:null finished:false header:: 2,12  replyHeader:: 2,292064345364,0  request:: '/hive_base/namespaces/hive/uris,F  response:: v{'dGhyaWZ0Oi8vaHphZGctYmRtcy03LnNlcnZlci4xNjMub3JnOjkwODM=,'dGhyaWZ0Oi8vaHphZGctYmRtcy04LnNlcnZlci4xNjMub3JnOjkwODM=},s{17179869664,17179869664,1527576306106,1527576306106,0,1106,0,0,0,2,292063632993} 
2020-03-04 20:23:17,401 INFO  hive.metastore                                                - atlasProxy is set to 
2020-03-04 20:23:17,401 INFO  hive.metastore                                                - Trying to connect to metastore with URI thrift://hzadg-bdms-7.server.163.org:9083
2020-03-04 20:23:17,408 INFO  hive.metastore                                                - tokenStrForm should not be null for querynull
2020-03-04 20:23:17,432 DEBUG org.apache.thrift.transport.TSaslTransport                    - opening transport org.apache.thrift.transport.TSaslClientTransport@3c69362a
2020-03-04 20:23:17,441 ERROR org.apache.thrift.transport.TSaslTransport                    - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
  at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
  at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
  at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
  at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
  at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:562)
  at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:351)
  at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:213)
  at org.apache.flink.table.catalog.hive.client.HiveShimV211.getHiveMetastoreClient(HiveShimV211.java:68)
  at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:225)
  at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:66)
  at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
  at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:266)
  at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
  at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
  ...... (application business logic) ......
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
  at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
  at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
  at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
  at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
  at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
  at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
  at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
  at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
  at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
  at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
  at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
  at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
  at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
  ... 42 more
2020-03-04 20:23:17,443 DEBUG org.apache.thrift.transport.TSaslTransport                    - CLIENT: Writing message with status BAD and payload length 19
2020-03-04 20:23:17,445 WARN  hive.metastore                                                - Failed to connect to the MetaStore Server...
org.apache.thrift.transport.TTransportException: GSS initiate failed
  at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
  at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
  at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
  at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
  at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:562)
  at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:351)
  at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:213)
  at org.apache.flink.table.catalog.hive.client.HiveShimV211.getHiveMetastoreClient(HiveShimV211.java:68)
  at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:225)
  at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:66)
  at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
  at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:266)
  at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
  at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
  ...... (application business logic) ......
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
  at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
  at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
  at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
  at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
  at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
  at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)




叶贤勋
yxx_cmhd@163.com


On 2020-03-03 19:04, Rui Li<li...@apache.org> wrote:
datanucleus is used on the HMS side. If things break without datanucleus, it means your code is trying to create an embedded metastore. Is that intended? My understanding is that you have a remote HMS and want HiveCatalog to connect to that HMS, right?
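A minimal hive-site.xml sketch for pointing HiveCatalog at a remote, Kerberos-secured HMS rather than an embedded one (the host and principal below are placeholders, not values from this thread):

```xml
<!-- hive-site.xml: connect to a remote metastore instead of an embedded one -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://hms-host:9083</value> <!-- placeholder host -->
  </property>
  <property>
    <name>hive.metastore.sasl.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.kerberos.principal</name>
    <value>hive/_HOST@EXAMPLE.COM</value> <!-- placeholder principal -->
  </property>
</configuration>
```

With hive.metastore.uris set and picked up by HiveCatalog, the datanucleus/RDBMS dependencies should not be needed on the client, since those are only used by an embedded metastore.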

On Tue, Mar 3, 2020 at 4:00 PM 叶贤勋 <yx...@163.com> wrote:

The hive conf should be correct; the earlier UserGroupInformation login succeeds.
Without the datanucleus dependencies, class-not-found and similar exceptions are thrown:
1、java.lang.ClassNotFoundException:
org.datanucleus.api.jdo.JDOPersistenceManagerFactory
2、Caused by: org.datanucleus.exceptions.NucleusUserException: There is no
available StoreManager of type "rdbms". Please make sure you have specified
"datanucleus.storeManagerType" correctly and that all relevant plugins are
in the CLASSPATH


叶贤勋
yxx_cmhd@163.com


On 2020-03-02 11:50, Rui Li<li...@apache.org> wrote:

From the log you posted, it looks like an embedded metastore was created. Can you check whether HiveCatalog has picked up an incorrect hive conf? Also, are all the maven dependencies you listed bundled into your Flink job jar? Dependencies like datanucleus should not be needed.

On Sat, Feb 29, 2020 at 10:42 PM 叶贤勋 <yx...@163.com> wrote:

Hi Rui Li, thanks for your reply.
The earlier problem was solved by setting yarn.resourcemanager.principal.
But now another problem has come up; please take a look.

Background: the Flink job still sources from and sinks to a Kerberos-secured Hive. The same code passes Kerberos authentication when tested locally, and can query and insert data into Hive. But once the job is submitted to the cluster, it fails with a Kerberos authentication error.
Flink: 1.9.1; flink-1.9.1/lib/ contains flink-dist_2.11-1.9.1.jar,
flink-shaded-hadoop-2-uber-2.7.5-7.0.jar, log4j-1.2.17.jar,
slf4j-log4j12-1.7.15.jar
Hive: 2.1.1
Main jars the Flink job depends on:
[INFO] +- org.apache.flink:flink-table-api-java:jar:flink-1.9.1:compile
[INFO] |  +- org.apache.flink:flink-table-common:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-core:jar:flink-1.9.1:compile
[INFO] |  |     +-
org.apache.flink:flink-annotations:jar:flink-1.9.1:compile
[INFO] |  |     +-
org.apache.flink:flink-metrics-core:jar:flink-1.9.1:compile
[INFO] |  |     \- com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
[INFO] |  |        +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
[INFO] |  |        \- org.objenesis:objenesis:jar:2.1:compile
[INFO] |  +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
[INFO] |  \- org.apache.flink:force-shading:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-table-planner-blink_2.11:jar:flink-1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-api-scala_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.scala-lang:scala-reflect:jar:2.11.12:compile
[INFO] |  |  \- org.scala-lang:scala-compiler:jar:2.11.12:compile
[INFO] |  +-
org.apache.flink:flink-table-api-java-bridge_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.apache.flink:flink-java:jar:flink-1.9.1:compile
[INFO] |  |  \-
org.apache.flink:flink-streaming-java_2.11:jar:1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-api-scala-bridge_2.11:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-scala_2.11:jar:flink-1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-runtime-blink_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.codehaus.janino:janino:jar:3.0.9:compile
[INFO] |  |  \- org.apache.calcite.avatica:avatica-core:jar:1.15.0:compile
[INFO] |  \- org.reflections:reflections:jar:0.9.10:compile
[INFO] +- org.apache.flink:flink-table-planner_2.11:jar:flink-1.9.1:compile
[INFO] +- org.apache.commons:commons-lang3:jar:3.9:compile
[INFO] +- com.typesafe.akka:akka-actor_2.11:jar:2.5.21:compile
[INFO] |  +- org.scala-lang:scala-library:jar:2.11.8:compile
[INFO] |  +- com.typesafe:config:jar:1.3.3:compile
[INFO] |  \-
org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
[INFO] +- org.apache.flink:flink-sql-client_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-clients_2.11:jar:1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-optimizer_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-streaming-scala_2.11:jar:1.9.1:compile
[INFO] |  +- log4j:log4j:jar:1.2.17:compile
[INFO] |  \- org.apache.flink:flink-shaded-jackson:jar:2.9.8-7.0:compile
[INFO] +- org.apache.flink:flink-json:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-csv:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-hbase_2.11:jar:1.9.1:compile
[INFO] +- org.apache.hbase:hbase-server:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-protobuf:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-netty:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-miscellaneous:jar:2.2.1:compile
[INFO] |  |  \-
com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
[INFO] |  +- org.apache.hbase:hbase-common:jar:2.2.1:compile
[INFO] |  |  \-
com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1:compile
[INFO] |  +- org.apache.hbase:hbase-http:jar:2.2.1:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-util:jar:9.3.27.v20190418:compile
[INFO] |  |  +-
org.eclipse.jetty:jetty-util-ajax:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-http:jar:9.3.27.v20190418:compile
[INFO] |  |  +-
org.eclipse.jetty:jetty-security:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.glassfish.jersey.core:jersey-server:jar:2.25.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.core:jersey-common:jar:2.25.1:compile
[INFO] |  |  |  |  +-
org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1:compile
[INFO] |  |  |  |  \-
org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.core:jersey-client:jar:2.25.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.media:jersey-media-jaxb:jar:2.25.1:compile
[INFO] |  |  |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
[INFO] |  |  |  +- org.glassfish.hk2:hk2-api:jar:2.5.0-b32:compile
[INFO] |  |  |  |  +- org.glassfish.hk2:hk2-utils:jar:2.5.0-b32:compile
[INFO] |  |  |  |  \-
org.glassfish.hk2.external:aopalliance-repackaged:jar:2.5.0-b32:compile
[INFO] |  |  |  +-
org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32:compile
[INFO] |  |  |  \- org.glassfish.hk2:hk2-locator:jar:2.5.0-b32:compile
[INFO] |  |  +-

org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.25.1:compile
[INFO] |  |  \- javax.ws.rs:javax.ws.rs-api:jar:2.0.1:compile
[INFO] |  +- org.apache.hbase:hbase-protocol:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-protocol-shaded:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-procedure:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-client:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-zookeeper:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-replication:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-metrics-api:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-metrics:jar:2.2.1:compile
[INFO] |  +- commons-codec:commons-codec:jar:1.10:compile
[INFO] |  +- org.apache.hbase:hbase-hadoop-compat:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-hadoop2-compat:jar:2.2.1:compile
[INFO] |  +- org.eclipse.jetty:jetty-server:jar:9.3.27.v20190418:compile
[INFO] |  |  \- org.eclipse.jetty:jetty-io:jar:9.3.27.v20190418:compile
[INFO] |  +- org.eclipse.jetty:jetty-servlet:jar:9.3.27.v20190418:compile
[INFO] |  +- org.eclipse.jetty:jetty-webapp:jar:9.3.27.v20190418:compile
[INFO] |  |  \- org.eclipse.jetty:jetty-xml:jar:9.3.27.v20190418:compile
[INFO] |  +- org.glassfish.web:javax.servlet.jsp:jar:2.3.2:compile
[INFO] |  |  \- org.glassfish:javax.el:jar:3.0.1-b11:compile (version
selected from constraint [3.0.0,))
[INFO] |  +- javax.servlet.jsp:javax.servlet.jsp-api:jar:2.3.1:compile
[INFO] |  +- io.dropwizard.metrics:metrics-core:jar:3.2.6:compile
[INFO] |  +- commons-io:commons-io:jar:2.5:compile
[INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:compile
[INFO] |  +- org.apache.zookeeper:zookeeper:jar:3.4.10:compile
[INFO] |  +- javax.servlet:javax.servlet-api:jar:3.1.0:compile
[INFO] |  +- org.apache.htrace:htrace-core4:jar:4.2.0-incubating:compile
[INFO] |  +- com.lmax:disruptor:jar:3.3.6:compile
[INFO] |  +- commons-logging:commons-logging:jar:1.2:compile
[INFO] |  +- org.apache.commons:commons-crypto:jar:1.0.0:compile
[INFO] |  +- org.apache.hadoop:hadoop-distcp:jar:2.8.5:compile
[INFO] |  \- org.apache.yetus:audience-annotations:jar:0.5.0:compile
[INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO] +- mysql:mysql-connector-java:jar:8.0.18:compile
[INFO] +- org.apache.flink:flink-connector-hive_2.11:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-hadoop-compatibility_2.11:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-shaded-hadoop-2-uber:jar:2.7.5-7.0:provided
[INFO] +- org.apache.hive:hive-exec:jar:2.1.1:compile
[INFO] |  +- org.apache.hive:hive-ant:jar:2.1.1:compile
[INFO] |  |  \- org.apache.velocity:velocity:jar:1.5:compile
[INFO] |  |     \- oro:oro:jar:2.0.8:compile
[INFO] |  +- org.apache.hive:hive-llap-tez:jar:2.1.1:compile
[INFO] |  |  +- org.apache.hive:hive-common:jar:2.1.1:compile
[INFO] |  |  |  +- org.apache.hive:hive-storage-api:jar:2.1.1:compile
[INFO] |  |  |  +- org.apache.hive:hive-orc:jar:2.1.1:compile
[INFO] |  |  |  |  \- org.iq80.snappy:snappy:jar:0.2:compile
[INFO] |  |  |  +-
org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127:compile
[INFO] |  |  |  |  +-
org.apache.geronimo.specs:geronimo-jta_1.1_spec:jar:1.1.1:compile
[INFO] |  |  |  |  +- javax.mail:mail:jar:1.4.1:compile
[INFO] |  |  |  |  +- javax.activation:activation:jar:1.1:compile
[INFO] |  |  |  |  +-
org.apache.geronimo.specs:geronimo-jaspic_1.0_spec:jar:1.0:compile
[INFO] |  |  |  |  +-
org.apache.geronimo.specs:geronimo-annotation_1.0_spec:jar:1.1.1:compile
[INFO] |  |  |  |  \- asm:asm-commons:jar:3.1:compile
[INFO] |  |  |  |     \- asm:asm-tree:jar:3.1:compile
[INFO] |  |  |  |        \- asm:asm:jar:3.1:compile
[INFO] |  |  |  +-
org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016:compile
[INFO] |  |  |  +- joda-time:joda-time:jar:2.8.1:compile
[INFO] |  |  |  +- org.json:json:jar:20160810:compile
[INFO] |  |  |  +- io.dropwizard.metrics:metrics-jvm:jar:3.1.0:compile
[INFO] |  |  |  +- io.dropwizard.metrics:metrics-json:jar:3.1.0:compile
[INFO] |  |  |  \-

com.github.joshelser:dropwizard-metrics-hadoop-metrics2-reporter:jar:0.1.2:compile
[INFO] |  |  \- org.apache.hive:hive-llap-client:jar:2.1.1:compile
[INFO] |  |     \- org.apache.hive:hive-llap-common:jar:2.1.1:compile
[INFO] |  |        \- org.apache.hive:hive-serde:jar:2.1.1:compile
[INFO] |  |           +- org.apache.hive:hive-service-rpc:jar:2.1.1:compile
[INFO] |  |           |  +- tomcat:jasper-compiler:jar:5.5.23:compile
[INFO] |  |           |  |  +- javax.servlet:jsp-api:jar:2.0:compile
[INFO] |  |           |  |  \- ant:ant:jar:1.6.5:compile
[INFO] |  |           |  +- tomcat:jasper-runtime:jar:5.5.23:compile
[INFO] |  |           |  |  +- javax.servlet:servlet-api:jar:2.4:compile
[INFO] |  |           |  |  \- commons-el:commons-el:jar:1.0:compile
[INFO] |  |           |  \- org.apache.thrift:libfb303:jar:0.9.3:compile
[INFO] |  |           +- org.apache.avro:avro:jar:1.7.7:compile
[INFO] |  |           |  \-
com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO] |  |           +- net.sf.opencsv:opencsv:jar:2.3:compile
[INFO] |  |           \-
org.apache.parquet:parquet-hadoop-bundle:jar:1.8.1:compile
[INFO] |  +- org.apache.hive:hive-shims:jar:2.1.1:compile
[INFO] |  |  +- org.apache.hive.shims:hive-shims-common:jar:2.1.1:compile
[INFO] |  |  |  \- org.apache.thrift:libthrift:jar:0.9.3:compile
[INFO] |  |  +- org.apache.hive.shims:hive-shims-0.23:jar:2.1.1:runtime
[INFO] |  |  |  \-
org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:2.6.1:runtime
[INFO] |  |  |     +-
org.apache.hadoop:hadoop-annotations:jar:2.6.1:runtime
[INFO] |  |  |     +-
com.google.inject.extensions:guice-servlet:jar:3.0:runtime
[INFO] |  |  |     +- com.google.inject:guice:jar:3.0:runtime
[INFO] |  |  |     |  +- javax.inject:javax.inject:jar:1:runtime
[INFO] |  |  |     |  \- aopalliance:aopalliance:jar:1.0:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-json:jar:1.9:runtime
[INFO] |  |  |     |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:runtime
[INFO] |  |  |     |  +-
org.codehaus.jackson:jackson-core-asl:jar:1.8.3:compile
[INFO] |  |  |     |  +-
org.codehaus.jackson:jackson-mapper-asl:jar:1.8.3:compile
[INFO] |  |  |     |  +-
org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:runtime
[INFO] |  |  |     |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:runtime
[INFO] |  |  |     +- com.sun.jersey.contribs:jersey-guice:jar:1.9:runtime
[INFO] |  |  |     |  \- com.sun.jersey:jersey-server:jar:1.9:runtime
[INFO] |  |  |     +-
org.apache.hadoop:hadoop-yarn-common:jar:2.6.1:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.1:runtime
[INFO] |  |  |     +- javax.xml.bind:jaxb-api:jar:2.2.2:runtime
[INFO] |  |  |     |  \- javax.xml.stream:stax-api:jar:1.0-2:runtime
[INFO] |  |  |     +- org.codehaus.jettison:jettison:jar:1.1:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-core:jar:1.9:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-client:jar:1.9:runtime
[INFO] |  |  |     +- org.mortbay.jetty:jetty-util:jar:6.1.26:runtime
[INFO] |  |  |     +-
org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.1:runtime
[INFO] |  |  |     |  \-
org.fusesource.leveldbjni:leveldbjni-all:jar:1.8:runtime
[INFO] |  |  |     +-

org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:2.6.1:runtime
[INFO] |  |  |     \-
org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:2.6.1:runtime
[INFO] |  |  |        \- org.mortbay.jetty:jetty:jar:6.1.26:runtime
[INFO] |  |  \-
org.apache.hive.shims:hive-shims-scheduler:jar:2.1.1:runtime
[INFO] |  +- commons-httpclient:commons-httpclient:jar:3.0.1:compile
[INFO] |  +- org.antlr:antlr-runtime:jar:3.4:compile
[INFO] |  |  +- org.antlr:stringtemplate:jar:3.2.1:compile
[INFO] |  |  \- antlr:antlr:jar:2.7.7:compile
[INFO] |  +- org.antlr:ST4:jar:4.0.4:compile
[INFO] |  +- org.apache.ant:ant:jar:1.9.1:compile
[INFO] |  |  \- org.apache.ant:ant-launcher:jar:1.9.1:compile
[INFO] |  +- org.apache.commons:commons-compress:jar:1.10:compile
[INFO] |  +- org.apache.ivy:ivy:jar:2.4.0:compile
[INFO] |  +- org.apache.curator:curator-framework:jar:2.6.0:compile
[INFO] |  |  \- org.apache.curator:curator-client:jar:2.6.0:compile
[INFO] |  +- org.apache.curator:apache-curator:pom:2.6.0:compile
[INFO] |  +- org.codehaus.groovy:groovy-all:jar:2.4.4:compile
[INFO] |  +- org.apache.calcite:calcite-core:jar:1.6.0:compile
[INFO] |  |  +- org.apache.calcite:calcite-linq4j:jar:1.6.0:compile
[INFO] |  |  +- commons-dbcp:commons-dbcp:jar:1.4:compile
[INFO] |  |  |  \- commons-pool:commons-pool:jar:1.5.4:compile
[INFO] |  |  +- net.hydromatic:aggdesigner-algorithm:jar:6.0:compile
[INFO] |  |  \- org.codehaus.janino:commons-compiler:jar:2.7.6:compile
[INFO] |  +- org.apache.calcite:calcite-avatica:jar:1.6.0:compile
[INFO] |  +- stax:stax-api:jar:1.0.1:compile
[INFO] |  \- jline:jline:jar:2.12:compile
[INFO] +- org.datanucleus:datanucleus-core:jar:4.1.6:compile
[INFO] +- org.datanucleus:datanucleus-api-jdo:jar:4.2.4:compile
[INFO] +- org.datanucleus:javax.jdo:jar:3.2.0-m3:compile
[INFO] |  \- javax.transaction:transaction-api:jar:1.1:compile
[INFO] +- org.datanucleus:datanucleus-rdbms:jar:4.1.9:compile
[INFO] +- hadoop-lzo:hadoop-lzo:jar:0.4.14:compile
[INFO] \- org.apache.flink:flink-runtime-web_2.11:jar:1.9.1:provided
[INFO]    +- org.apache.flink:flink-runtime_2.11:jar:1.9.1:compile
[INFO]    |  +-
org.apache.flink:flink-queryable-state-client-java:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-hadoop-fs:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-shaded-asm-6:jar:6.2.1-7.0:compile
[INFO]    |  +- com.typesafe.akka:akka-stream_2.11:jar:2.5.21:compile
[INFO]    |  |  +- org.reactivestreams:reactive-streams:jar:1.0.2:compile
[INFO]    |  |  \- com.typesafe:ssl-config-core_2.11:jar:0.3.7:compile
[INFO]    |  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21:compile
[INFO]    |  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.11:compile
[INFO]    |  +- org.clapper:grizzled-slf4j_2.11:jar:1.3.2:compile
[INFO]    |  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
[INFO]    |  +- org.xerial.snappy:snappy-java:jar:1.1.4:compile
[INFO]    |  \- com.twitter:chill_2.11:jar:0.7.6:compile
[INFO]    |     \- com.twitter:chill-java:jar:0.7.6:compile
[INFO]    +-
org.apache.flink:flink-shaded-netty:jar:4.1.32.Final-7.0:compile
[INFO]    +- org.apache.flink:flink-shaded-guava:jar:18.0-7.0:compile
[INFO]    \- org.javassist:javassist:jar:3.19.0-GA:compile
[INFO] ————————————————————————————————————

Log:

2020-02-28 17:17:07,890 INFO  org.apache.hadoop.security.UserGroupInformation               - Login successful for user ***/dev@***.COM using keytab file /home/***/key.keytab
The line above is printed by Flink; it shows the Kerberos login succeeded. But the job still fails with the following exception:
2020-02-28 17:17:08,658 INFO  org.apache.hadoop.hive.metastore.ObjectStore
- Setting MetaStore object pin classes with
hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
2020-02-28 17:17:09,280 INFO
org.apache.hadoop.hive.metastore.MetaStoreDirectSql           - Using
direct SQL, underlying DB is MYSQL
2020-02-28 17:17:09,283 INFO  org.apache.hadoop.hive.metastore.ObjectStore
- Initialized ObjectStore
2020-02-28 17:17:09,450 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore                - Added
admin role in metastore
2020-02-28 17:17:09,452 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore                - Added
public role in metastore
2020-02-28 17:17:09,474 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore                - No user is
added in admin role, since config is empty
2020-02-28 17:17:09,634 INFO
org.apache.flink.table.catalog.hive.HiveCatalog               - Connected
to Hive metastore
2020-02-28 17:17:09,635 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore                - 0:
get_database: ***
2020-02-28 17:17:09,637 INFO
org.apache.hadoop.hive.metastore.HiveMetaStore.audit          - ugi=***
ip=unknown-ip-addr cmd=get_database: ***
2020-02-28 17:17:09,658 INFO  org.apache.hadoop.hive.ql.metadata.HiveUtils
- Adding metastore authorization provider:

org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
2020-02-28 17:17:10,166 WARN
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory       - The
short-circuit local reads feature cannot be used because libhadoop cannot
be loaded.
2020-02-28 17:17:10,391 WARN  org.apache.hadoop.ipc.Client
- Exception encountered while connecting to the server :
org.apache.hadoop.security.AccessControlException: Client cannot
authenticate via:[TOKEN, KERBEROS]
2020-02-28 17:17:10,397 WARN  org.apache.hadoop.ipc.Client
- Exception encountered while connecting to the server :
org.apache.hadoop.security.AccessControlException: Client cannot
authenticate via:[TOKEN, KERBEROS]
2020-02-28 17:17:10,398 INFO
org.apache.hadoop.io.retry.RetryInvocationHandler             - Exception
while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over
******.org/***.***.***.***:8020 after 1 fail over attempts. Trying to fail
over immediately.
java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "***.***.***.org/***.***.***.***"; destination host is: "******.org":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy41.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy42.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
at org.apache.hadoop.hive.common.FileUtils.getFileStatusOrNull(FileUtils.java:770)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:368)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:343)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:152)
at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeReadDatabase(AuthorizationPreEventListener.java:204)
at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.onEvent(AuthorizationPreEventListener.java:152)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2153)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:932)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy35.get_database(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:150)
at com.sun.proxy.$Proxy36.getDatabase(Unknown Source)
at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.getDatabase(HiveMetastoreClientWrapper.java:102)
at org.apache.flink.table.catalog.hive.HiveCatalog.databaseExists(HiveCatalog.java:347)
at org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:244)
at org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)


…… (in the code elided here, UserGroupInformation.loginUserFromKeytab(principal, keytab) was called and authentication succeeded)
at this is my code.main(MyMainClass.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1452)
... 67 more
Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
... 70 more
Right now the diagnosis looks as if a polluted jar (conflicting classes on the classpath) is the cause. Any pointers would be appreciated. Thanks!
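One hedged way to investigate a suspected polluted classpath is to ask the JVM which jar a class was actually loaded from. This is a minimal self-contained sketch (the Hadoop class name in the comment is the kind of target you would probe in a real Flink client; it is not needed to run the sketch):

```java
import java.security.CodeSource;

public class ClassOrigin {
    // Returns the jar/directory a class was loaded from, or null when the
    // class is bootstrap-loaded (JDK classes) or not on the classpath at all.
    static String locate(String fqcn) {
        try {
            CodeSource cs = Class.forName(fqcn).getProtectionDomain().getCodeSource();
            return cs == null ? null : cs.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // In a real job you might probe e.g. "org.apache.hadoop.security.UserGroupInformation"
        // to see whether it comes from flink-shaded-hadoop or from the fat jar.
        System.out.println(locate(ClassOrigin.class.getName()));
    }
}
```

If a security-related class resolves to an unexpected jar (for example, Hadoop classes present both in the fat jar and in flink-shaded-hadoop-2-uber), that is a strong hint of the kind of jar pollution suspected above.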

叶贤勋
yxx_cmhd@163.com


On 2020-02-28 15:16, Rui Li <li...@apache.org> wrote:

Hi 叶贤勋,

I don't have a Kerberos environment at hand. From the TokenCache code (version 2.7.5), this exception is probably caused by failing to obtain the RM address or principal. Please check the following configs:
mapreduce.framework.name
yarn.resourcemanager.address
yarn.resourcemanager.principal
and whether your Flink job can actually read them.
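To make the "check these configs" advice concrete, here is a minimal self-contained sketch of a fail-fast check. A plain Map stands in for Hadoop's Configuration (an assumption for illustration; in a real job you would call conf.get(...) on the Configuration the Flink job actually sees):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class KerberosConfCheck {
    // The keys suggested above for verification.
    static final String[] REQUIRED = {
        "mapreduce.framework.name",
        "yarn.resourcemanager.address",
        "yarn.resourcemanager.principal",
    };

    // Returns the required keys that are missing or empty in the given config.
    static List<String> missingKeys(Map<String, String> conf) {
        List<String> missing = new ArrayList<>();
        for (String key : REQUIRED) {
            String value = conf.get(key);
            if (value == null || value.trim().isEmpty()) {
                missing.add(key);
            }
        }
        return missing;
    }
}
```

Logging the result of such a check at job startup makes it obvious whether the client actually picked up yarn-site.xml / mapred-site.xml.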

On Fri, Feb 28, 2020 at 11:10 AM Kurt Young <yk...@gmail.com> wrote:

cc @lirui@apache.org <li...@apache.org>

Best,
Kurt


On Thu, Feb 13, 2020 at 10:22 AM 叶贤勋 <yx...@163.com> wrote:

Hi everyone:
I'd like to ask about an exception when using a Hive 2.1.1 source with Kerberos authentication.
Flink version: 1.9
Hive version: 2.1.1, with HiveShimV211 implemented.
Code:
public class HiveCatalogTest {
    private static final Logger LOG = LoggerFactory.getLogger(HiveCatalogTest.class);
    private String hiveConfDir = "/Users/yexianxun/dev/env/test-hive"; // a local path
    private TableEnvironment tableEnv;
    private HiveCatalog hive;
    private String hiveName;
    private String hiveDB;
    private String version;

    @Before
    public void before() {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        tableEnv = TableEnvironment.create(settings);
        hiveName = "myhive";
        hiveDB = "sloth";
        version = "2.1.1";
    }

    @Test
    public void testCatalogQuerySink() throws Exception {
        hive = new HiveCatalog(hiveName, hiveDB, hiveConfDir, version);
        System.setProperty("java.security.krb5.conf", hiveConfDir + "/krb5.conf");
        tableEnv.getConfig().getConfiguration().setString("stream_mode", "false");
        tableEnv.registerCatalog(hiveName, hive);
        tableEnv.useCatalog(hiveName);
        String query = "select * from " + hiveName + "." + hiveDB + ".testtbl2 where id = 20200202";
        Table table = tableEnv.sqlQuery(query);
        String newTableName = "testtbl2_1";
        table.insertInto(hiveName, hiveDB, newTableName);
        tableEnv.execute("test");
    }
}


HiveMetastoreClientFactory:
public static HiveMetastoreClientWrapper create(HiveConf hiveConf, String hiveVersion) {
    Preconditions.checkNotNull(hiveVersion, "Hive version cannot be null");
    if (System.getProperty("java.security.krb5.conf") != null) {
        if (System.getProperty("had_set_kerberos") == null) {
            String principal = "sloth/dev@BDMS.163.COM";
            String keytab = "/Users/yexianxun/dev/env/mammut-test-hive/sloth.keytab";
            try {
                sun.security.krb5.Config.refresh();
                UserGroupInformation.setConfiguration(hiveConf);
                UserGroupInformation.loginUserFromKeytab(principal, keytab);
                System.setProperty("had_set_kerberos", "true");
            } catch (Exception e) {
                LOG.error("", e);
            }
        }
    }
    return new HiveMetastoreClientWrapper(hiveConf, hiveVersion);
}


HiveCatalog:
private static HiveConf createHiveConf(@Nullable String hiveConfDir) {
    LOG.info("Setting hive conf dir as {}", hiveConfDir);
    try {
        HiveConf.setHiveSiteLocation(
                hiveConfDir == null ? null : Paths.get(hiveConfDir, "hive-site.xml").toUri().toURL());
    } catch (MalformedURLException e) {
        throw new CatalogException(
                String.format("Failed to get hive-site.xml from %s", hiveConfDir), e);
    }

    // create HiveConf from hadoop configuration
    HiveConf hiveConf = new HiveConf(
            HadoopUtils.getHadoopConfiguration(new org.apache.flink.configuration.Configuration()),
            HiveConf.class);
    try {
        hiveConf.addResource(Paths.get(hiveConfDir, "hdfs-site.xml").toUri().toURL());
        hiveConf.addResource(Paths.get(hiveConfDir, "core-site.xml").toUri().toURL());
    } catch (MalformedURLException e) {
        throw new CatalogException(
                String.format("Failed to get hdfs|core-site.xml from %s", hiveConfDir), e);
    }
    return hiveConf;
}
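For context on the createHiveConf code above: Hadoop's Configuration.addResource applies resources in order, and a key set by a later resource overrides an earlier value (unless the earlier property is marked final, which this sketch omits). A simplified model of that merge order, using plain Maps as stand-ins for XML resources:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ResourceMerge {
    // Later resources win, mirroring Configuration.addResource order.
    // Final-property handling is deliberately left out of this sketch.
    static Map<String, String> merge(List<Map<String, String>> resourcesInOrder) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (Map<String, String> resource : resourcesInOrder) {
            merged.putAll(resource);
        }
        return merged;
    }
}
```

This ordering is one reason a wrong or incomplete conf dir can silently override values the job is expected to see.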


Running the testCatalogQuerySink method produces the following error:
org.apache.flink.runtime.client.JobExecutionException: Could not retrieve JobResult.
at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:622)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:117)
at org.apache.flink.table.planner.delegation.BatchExecutor.execute(BatchExecutor.java:55)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:410)
at api.HiveCatalogTest.testCatalogQuerySink(HiveCatalogMumTest.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit job.
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could not set up JobManager
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
at org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
at org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
at org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
... 7 more
Caused by: org.apache.flink.runtime.JobException: Creating the input splits caused an error: Can't get Master Kerberos principal for use as renewer
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:270)
at org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:907)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:230)
at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
at org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
at org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
at org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
at org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
at org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
... 10 more
Caused by: java.io.IOException: Can't get Master Kerberos principal for use as renewer
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:159)
at org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:63)
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:256)
... 22 more


The sink test method can insert data normally, but the Hive source reports this error. It feels like it is caused by the delegation token lookup returning null. I don't know how to solve this specifically.
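The failure path can be modeled compactly. In Hadoop 2.7.x, TokenCache.obtainTokensForNamenodesInternal needs the "master" (ResourceManager) Kerberos principal to use as the token renewer, and throws when it resolves to null. The check is roughly equivalent to this simplified sketch (a plain Map replaces Hadoop's Configuration, and the real resolution also consults mapreduce.framework.name):

```java
import java.io.IOException;
import java.util.Map;

public class RenewerLookup {
    // Simplified model of the check behind
    // "Can't get Master Kerberos principal for use as renewer".
    static String masterPrincipal(Map<String, String> conf) throws IOException {
        String principal = conf.get("yarn.resourcemanager.principal");
        if (principal == null || principal.isEmpty()) {
            throw new IOException("Can't get Master Kerberos principal for use as renewer");
        }
        return principal;
    }
}
```

This is why setting yarn.resourcemanager.principal (as reported later in the thread) resolves the source-side error even though the sink path never needs a delegation token renewer.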





叶贤勋
yxx_cmhd@163.com





Re: Hive Source With Kerberos认证问题

Posted by Rui Li <li...@apache.org>.
datanucleus is used on the HMS side. If leaving out datanucleus causes errors, it means your code is trying to create an embedded metastore. Is that the expected behavior? My understanding is that you have a remote HMS and want HiveCatalog to connect to that HMS, right?
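Hive decides between a remote and an embedded metastore based on hive.metastore.uris: when that property is empty, the metastore is instantiated in-process, which is when the datanucleus/RDBMS classes get pulled in. A minimal sketch of that decision (simplified from Hive's actual check):

```java
public class MetastoreMode {
    // Empty/absent hive.metastore.uris => embedded (in-process) metastore,
    // which needs datanucleus and an RDBMS driver on the classpath.
    static boolean isEmbedded(String metastoreUris) {
        return metastoreUris == null || metastoreUris.trim().isEmpty();
    }
}
```

So hitting datanucleus ClassNotFoundException on the cluster, while the same code works locally, suggests the cluster-side HiveConf lost its hive.metastore.uris value, which is consistent with the diagnosis above.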

On Tue, Mar 3, 2020 at 4:00 PM 叶贤勋 <yx...@163.com> wrote:

> The hive conf should be correct; the earlier UserGroupInformation login succeeded.
> Without the datanucleus dependencies, class-not-found errors occur:
> 1. java.lang.ClassNotFoundException:
> org.datanucleus.api.jdo.JDOPersistenceManagerFactory
> 2. Caused by: org.datanucleus.exceptions.NucleusUserException: There is no
> available StoreManager of type "rdbms". Please make sure you have specified
> "datanucleus.storeManagerType" correctly and that all relevant plugins are
> in the CLASSPATH
>
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
> On 2020-03-02 11:50, Rui Li <li...@apache.org> wrote:
>
> From the log you posted, it seems an embedded metastore was created. Could
> you check whether HiveCatalog is reading an incorrect hive conf? Also, are
> all the maven dependencies you listed packaged into your Flink job jar?
> Dependencies like datanucleus should not be needed.
>
> On Sat, Feb 29, 2020 at 10:42 PM 叶贤勋 <yx...@163.com> wrote:
>
> Hi 李锐, thanks for your reply.
> The earlier problem has been solved by setting yarn.resourcemanager.principal.
> But now another problem has come up; please help take a look.
>
> Background: the Flink job still sources & sinks a Kerberos-enabled Hive. The
> same code passes Kerberos authentication when tested locally, and can query
> and insert data into Hive, but once the job is submitted to the cluster it
> fails with a Kerberos authentication error.
> Flink: 1.9.1; flink-1.9.1/lib/ contains flink-dist_2.11-1.9.1.jar,
> flink-shaded-hadoop-2-uber-2.7.5-7.0.jar, log4j-1.2.17.jar,
> slf4j-log4j12-1.7.15.jar
> Hive: 2.1.1
> Main jars the Flink job depends on:
> [INFO] +- org.apache.flink:flink-table-api-java:jar:flink-1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-table-common:jar:flink-1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-core:jar:flink-1.9.1:compile
> [INFO] |  |     +-
> org.apache.flink:flink-annotations:jar:flink-1.9.1:compile
> [INFO] |  |     +-
> org.apache.flink:flink-metrics-core:jar:flink-1.9.1:compile
> [INFO] |  |     \- com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [INFO] |  |        +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
> [INFO] |  |        \- org.objenesis:objenesis:jar:2.1:compile
> [INFO] |  +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [INFO] |  \- org.apache.flink:force-shading:jar:1.9.1:compile
> [INFO] +-
> org.apache.flink:flink-table-planner-blink_2.11:jar:flink-1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-scala_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.scala-lang:scala-reflect:jar:2.11.12:compile
> [INFO] |  |  \- org.scala-lang:scala-compiler:jar:2.11.12:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-java-bridge_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.apache.flink:flink-java:jar:flink-1.9.1:compile
> [INFO] |  |  \-
> org.apache.flink:flink-streaming-java_2.11:jar:1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-api-scala-bridge_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-scala_2.11:jar:flink-1.9.1:compile
> [INFO] |  +-
> org.apache.flink:flink-table-runtime-blink_2.11:jar:flink-1.9.1:compile
> [INFO] |  |  +- org.codehaus.janino:janino:jar:3.0.9:compile
> [INFO] |  |  \- org.apache.calcite.avatica:avatica-core:jar:1.15.0:compile
> [INFO] |  \- org.reflections:reflections:jar:0.9.10:compile
> [INFO] +- org.apache.flink:flink-table-planner_2.11:jar:flink-1.9.1:compile
> [INFO] +- org.apache.commons:commons-lang3:jar:3.9:compile
> [INFO] +- com.typesafe.akka:akka-actor_2.11:jar:2.5.21:compile
> [INFO] |  +- org.scala-lang:scala-library:jar:2.11.8:compile
> [INFO] |  +- com.typesafe:config:jar:1.3.3:compile
> [INFO] |  \-
> org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
> [INFO] +- org.apache.flink:flink-sql-client_2.11:jar:1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-clients_2.11:jar:1.9.1:compile
> [INFO] |  |  \- org.apache.flink:flink-optimizer_2.11:jar:1.9.1:compile
> [INFO] |  +- org.apache.flink:flink-streaming-scala_2.11:jar:1.9.1:compile
> [INFO] |  +- log4j:log4j:jar:1.2.17:compile
> [INFO] |  \- org.apache.flink:flink-shaded-jackson:jar:2.9.8-7.0:compile
> [INFO] +- org.apache.flink:flink-json:jar:1.9.1:compile
> [INFO] +- org.apache.flink:flink-csv:jar:1.9.1:compile
> [INFO] +- org.apache.flink:flink-hbase_2.11:jar:1.9.1:compile
> [INFO] +- org.apache.hbase:hbase-server:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-protobuf:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-netty:jar:2.2.1:compile
> [INFO] |  +-
> org.apache.hbase.thirdparty:hbase-shaded-miscellaneous:jar:2.2.1:compile
> [INFO] |  |  \-
> com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
> [INFO] |  +- org.apache.hbase:hbase-common:jar:2.2.1:compile
> [INFO] |  |  \-
> com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1:compile
> [INFO] |  +- org.apache.hbase:hbase-http:jar:2.2.1:compile
> [INFO] |  |  +- org.eclipse.jetty:jetty-util:jar:9.3.27.v20190418:compile
> [INFO] |  |  +-
> org.eclipse.jetty:jetty-util-ajax:jar:9.3.27.v20190418:compile
> [INFO] |  |  +- org.eclipse.jetty:jetty-http:jar:9.3.27.v20190418:compile
> [INFO] |  |  +-
> org.eclipse.jetty:jetty-security:jar:9.3.27.v20190418:compile
> [INFO] |  |  +- org.glassfish.jersey.core:jersey-server:jar:2.25.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.core:jersey-common:jar:2.25.1:compile
> [INFO] |  |  |  |  +-
> org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1:compile
> [INFO] |  |  |  |  \-
> org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.core:jersey-client:jar:2.25.1:compile
> [INFO] |  |  |  +-
> org.glassfish.jersey.media:jersey-media-jaxb:jar:2.25.1:compile
> [INFO] |  |  |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
> [INFO] |  |  |  +- org.glassfish.hk2:hk2-api:jar:2.5.0-b32:compile
> [INFO] |  |  |  |  +- org.glassfish.hk2:hk2-utils:jar:2.5.0-b32:compile
> [INFO] |  |  |  |  \-
> org.glassfish.hk2.external:aopalliance-repackaged:jar:2.5.0-b32:compile
> [INFO] |  |  |  +-
> org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32:compile
> [INFO] |  |  |  \- org.glassfish.hk2:hk2-locator:jar:2.5.0-b32:compile
> [INFO] |  |  +-
>
> org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.25.1:compile
> [INFO] |  |  \- javax.ws.rs:javax.ws.rs-api:jar:2.0.1:compile
> [INFO] |  +- org.apache.hbase:hbase-protocol:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-protocol-shaded:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-procedure:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-client:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-zookeeper:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-replication:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-metrics-api:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-metrics:jar:2.2.1:compile
> [INFO] |  +- commons-codec:commons-codec:jar:1.10:compile
> [INFO] |  +- org.apache.hbase:hbase-hadoop-compat:jar:2.2.1:compile
> [INFO] |  +- org.apache.hbase:hbase-hadoop2-compat:jar:2.2.1:compile
> [INFO] |  +- org.eclipse.jetty:jetty-server:jar:9.3.27.v20190418:compile
> [INFO] |  |  \- org.eclipse.jetty:jetty-io:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.eclipse.jetty:jetty-servlet:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.eclipse.jetty:jetty-webapp:jar:9.3.27.v20190418:compile
> [INFO] |  |  \- org.eclipse.jetty:jetty-xml:jar:9.3.27.v20190418:compile
> [INFO] |  +- org.glassfish.web:javax.servlet.jsp:jar:2.3.2:compile
> [INFO] |  |  \- org.glassfish:javax.el:jar:3.0.1-b11:compile (version
> selected from constraint [3.0.0,))
> [INFO] |  +- javax.servlet.jsp:javax.servlet.jsp-api:jar:2.3.1:compile
> [INFO] |  +- io.dropwizard.metrics:metrics-core:jar:3.2.6:compile
> [INFO] |  +- commons-io:commons-io:jar:2.5:compile
> [INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:compile
> [INFO] |  +- org.apache.zookeeper:zookeeper:jar:3.4.10:compile
> [INFO] |  +- javax.servlet:javax.servlet-api:jar:3.1.0:compile
> [INFO] |  +- org.apache.htrace:htrace-core4:jar:4.2.0-incubating:compile
> [INFO] |  +- com.lmax:disruptor:jar:3.3.6:compile
> [INFO] |  +- commons-logging:commons-logging:jar:1.2:compile
> [INFO] |  +- org.apache.commons:commons-crypto:jar:1.0.0:compile
> [INFO] |  +- org.apache.hadoop:hadoop-distcp:jar:2.8.5:compile
> [INFO] |  \- org.apache.yetus:audience-annotations:jar:0.5.0:compile
> [INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
> [INFO] +- mysql:mysql-connector-java:jar:8.0.18:compile
> [INFO] +- org.apache.flink:flink-connector-hive_2.11:jar:1.9.1:compile
> [INFO] +-
> org.apache.flink:flink-hadoop-compatibility_2.11:jar:1.9.1:compile
> [INFO] +-
> org.apache.flink:flink-shaded-hadoop-2-uber:jar:2.7.5-7.0:provided
> [INFO] +- org.apache.hive:hive-exec:jar:2.1.1:compile
> [INFO] |  +- org.apache.hive:hive-ant:jar:2.1.1:compile
> [INFO] |  |  \- org.apache.velocity:velocity:jar:1.5:compile
> [INFO] |  |     \- oro:oro:jar:2.0.8:compile
> [INFO] |  +- org.apache.hive:hive-llap-tez:jar:2.1.1:compile
> [INFO] |  |  +- org.apache.hive:hive-common:jar:2.1.1:compile
> [INFO] |  |  |  +- org.apache.hive:hive-storage-api:jar:2.1.1:compile
> [INFO] |  |  |  +- org.apache.hive:hive-orc:jar:2.1.1:compile
> [INFO] |  |  |  |  \- org.iq80.snappy:snappy:jar:0.2:compile
> [INFO] |  |  |  +-
> org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127:compile
> [INFO] |  |  |  |  +-
> org.apache.geronimo.specs:geronimo-jta_1.1_spec:jar:1.1.1:compile
> [INFO] |  |  |  |  +- javax.mail:mail:jar:1.4.1:compile
> [INFO] |  |  |  |  +- javax.activation:activation:jar:1.1:compile
> [INFO] |  |  |  |  +-
> org.apache.geronimo.specs:geronimo-jaspic_1.0_spec:jar:1.0:compile
> [INFO] |  |  |  |  +-
> org.apache.geronimo.specs:geronimo-annotation_1.0_spec:jar:1.1.1:compile
> [INFO] |  |  |  |  \- asm:asm-commons:jar:3.1:compile
> [INFO] |  |  |  |     \- asm:asm-tree:jar:3.1:compile
> [INFO] |  |  |  |        \- asm:asm:jar:3.1:compile
> [INFO] |  |  |  +-
> org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016:compile
> [INFO] |  |  |  +- joda-time:joda-time:jar:2.8.1:compile
> [INFO] |  |  |  +- org.json:json:jar:20160810:compile
> [INFO] |  |  |  +- io.dropwizard.metrics:metrics-jvm:jar:3.1.0:compile
> [INFO] |  |  |  +- io.dropwizard.metrics:metrics-json:jar:3.1.0:compile
> [INFO] |  |  |  \-
>
> com.github.joshelser:dropwizard-metrics-hadoop-metrics2-reporter:jar:0.1.2:compile
> [INFO] |  |  \- org.apache.hive:hive-llap-client:jar:2.1.1:compile
> [INFO] |  |     \- org.apache.hive:hive-llap-common:jar:2.1.1:compile
> [INFO] |  |        \- org.apache.hive:hive-serde:jar:2.1.1:compile
> [INFO] |  |           +- org.apache.hive:hive-service-rpc:jar:2.1.1:compile
> [INFO] |  |           |  +- tomcat:jasper-compiler:jar:5.5.23:compile
> [INFO] |  |           |  |  +- javax.servlet:jsp-api:jar:2.0:compile
> [INFO] |  |           |  |  \- ant:ant:jar:1.6.5:compile
> [INFO] |  |           |  +- tomcat:jasper-runtime:jar:5.5.23:compile
> [INFO] |  |           |  |  +- javax.servlet:servlet-api:jar:2.4:compile
> [INFO] |  |           |  |  \- commons-el:commons-el:jar:1.0:compile
> [INFO] |  |           |  \- org.apache.thrift:libfb303:jar:0.9.3:compile
> [INFO] |  |           +- org.apache.avro:avro:jar:1.7.7:compile
> [INFO] |  |           |  \-
> com.thoughtworks.paranamer:paranamer:jar:2.3:compile
> [INFO] |  |           +- net.sf.opencsv:opencsv:jar:2.3:compile
> [INFO] |  |           \-
> org.apache.parquet:parquet-hadoop-bundle:jar:1.8.1:compile
> [INFO] |  +- org.apache.hive:hive-shims:jar:2.1.1:compile
> [INFO] |  |  +- org.apache.hive.shims:hive-shims-common:jar:2.1.1:compile
> [INFO] |  |  |  \- org.apache.thrift:libthrift:jar:0.9.3:compile
> [INFO] |  |  +- org.apache.hive.shims:hive-shims-0.23:jar:2.1.1:runtime
> [INFO] |  |  |  \-
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:2.6.1:runtime
> [INFO] |  |  |     +-
> org.apache.hadoop:hadoop-annotations:jar:2.6.1:runtime
> [INFO] |  |  |     +-
> com.google.inject.extensions:guice-servlet:jar:3.0:runtime
> [INFO] |  |  |     +- com.google.inject:guice:jar:3.0:runtime
> [INFO] |  |  |     |  +- javax.inject:javax.inject:jar:1:runtime
> [INFO] |  |  |     |  \- aopalliance:aopalliance:jar:1.0:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-json:jar:1.9:runtime
> [INFO] |  |  |     |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:runtime
> [INFO] |  |  |     |  +-
> org.codehaus.jackson:jackson-core-asl:jar:1.8.3:compile
> [INFO] |  |  |     |  +-
> org.codehaus.jackson:jackson-mapper-asl:jar:1.8.3:compile
> [INFO] |  |  |     |  +-
> org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:runtime
> [INFO] |  |  |     |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:runtime
> [INFO] |  |  |     +- com.sun.jersey.contribs:jersey-guice:jar:1.9:runtime
> [INFO] |  |  |     |  \- com.sun.jersey:jersey-server:jar:1.9:runtime
> [INFO] |  |  |     +-
> org.apache.hadoop:hadoop-yarn-common:jar:2.6.1:runtime
> [INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.1:runtime
> [INFO] |  |  |     +- javax.xml.bind:jaxb-api:jar:2.2.2:runtime
> [INFO] |  |  |     |  \- javax.xml.stream:stax-api:jar:1.0-2:runtime
> [INFO] |  |  |     +- org.codehaus.jettison:jettison:jar:1.1:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-core:jar:1.9:runtime
> [INFO] |  |  |     +- com.sun.jersey:jersey-client:jar:1.9:runtime
> [INFO] |  |  |     +- org.mortbay.jetty:jetty-util:jar:6.1.26:runtime
> [INFO] |  |  |     +-
> org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.1:runtime
> [INFO] |  |  |     |  \-
> org.fusesource.leveldbjni:leveldbjni-all:jar:1.8:runtime
> [INFO] |  |  |     +-
>
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:2.6.1:runtime
> [INFO] |  |  |     \-
> org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:2.6.1:runtime
> [INFO] |  |  |        \- org.mortbay.jetty:jetty:jar:6.1.26:runtime
> [INFO] |  |  \-
> org.apache.hive.shims:hive-shims-scheduler:jar:2.1.1:runtime
> [INFO] |  +- commons-httpclient:commons-httpclient:jar:3.0.1:compile
> [INFO] |  +- org.antlr:antlr-runtime:jar:3.4:compile
> [INFO] |  |  +- org.antlr:stringtemplate:jar:3.2.1:compile
> [INFO] |  |  \- antlr:antlr:jar:2.7.7:compile
> [INFO] |  +- org.antlr:ST4:jar:4.0.4:compile
> [INFO] |  +- org.apache.ant:ant:jar:1.9.1:compile
> [INFO] |  |  \- org.apache.ant:ant-launcher:jar:1.9.1:compile
> [INFO] |  +- org.apache.commons:commons-compress:jar:1.10:compile
> [INFO] |  +- org.apache.ivy:ivy:jar:2.4.0:compile
> [INFO] |  +- org.apache.curator:curator-framework:jar:2.6.0:compile
> [INFO] |  |  \- org.apache.curator:curator-client:jar:2.6.0:compile
> [INFO] |  +- org.apache.curator:apache-curator:pom:2.6.0:compile
> [INFO] |  +- org.codehaus.groovy:groovy-all:jar:2.4.4:compile
> [INFO] |  +- org.apache.calcite:calcite-core:jar:1.6.0:compile
> [INFO] |  |  +- org.apache.calcite:calcite-linq4j:jar:1.6.0:compile
> [INFO] |  |  +- commons-dbcp:commons-dbcp:jar:1.4:compile
> [INFO] |  |  |  \- commons-pool:commons-pool:jar:1.5.4:compile
> [INFO] |  |  +- net.hydromatic:aggdesigner-algorithm:jar:6.0:compile
> [INFO] |  |  \- org.codehaus.janino:commons-compiler:jar:2.7.6:compile
> [INFO] |  +- org.apache.calcite:calcite-avatica:jar:1.6.0:compile
> [INFO] |  +- stax:stax-api:jar:1.0.1:compile
> [INFO] |  \- jline:jline:jar:2.12:compile
> [INFO] +- org.datanucleus:datanucleus-core:jar:4.1.6:compile
> [INFO] +- org.datanucleus:datanucleus-api-jdo:jar:4.2.4:compile
> [INFO] +- org.datanucleus:javax.jdo:jar:3.2.0-m3:compile
> [INFO] |  \- javax.transaction:transaction-api:jar:1.1:compile
> [INFO] +- org.datanucleus:datanucleus-rdbms:jar:4.1.9:compile
> [INFO] +- hadoop-lzo:hadoop-lzo:jar:0.4.14:compile
> [INFO] \- org.apache.flink:flink-runtime-web_2.11:jar:1.9.1:provided
> [INFO]    +- org.apache.flink:flink-runtime_2.11:jar:1.9.1:compile
> [INFO]    |  +-
> org.apache.flink:flink-queryable-state-client-java:jar:1.9.1:compile
> [INFO]    |  +- org.apache.flink:flink-hadoop-fs:jar:1.9.1:compile
> [INFO]    |  +- org.apache.flink:flink-shaded-asm-6:jar:6.2.1-7.0:compile
> [INFO]    |  +- com.typesafe.akka:akka-stream_2.11:jar:2.5.21:compile
> [INFO]    |  |  +- org.reactivestreams:reactive-streams:jar:1.0.2:compile
> [INFO]    |  |  \- com.typesafe:ssl-config-core_2.11:jar:0.3.7:compile
> [INFO]    |  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21:compile
> [INFO]    |  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.11:compile
> [INFO]    |  +- org.clapper:grizzled-slf4j_2.11:jar:1.3.2:compile
> [INFO]    |  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
> [INFO]    |  +- org.xerial.snappy:snappy-java:jar:1.1.4:compile
> [INFO]    |  \- com.twitter:chill_2.11:jar:0.7.6:compile
> [INFO]    |     \- com.twitter:chill-java:jar:0.7.6:compile
> [INFO]    +-
> org.apache.flink:flink-shaded-netty:jar:4.1.32.Final-7.0:compile
> [INFO]    +- org.apache.flink:flink-shaded-guava:jar:18.0-7.0:compile
> [INFO]    \- org.javassist:javassist:jar:3.19.0-GA:compile
> [INFO] ————————————————————————————————————
>
> Logs:
>
> 2020-02-28 17:17:07,890 INFO
> org.apache.hadoop.security.UserGroupInformation               - Login
> successful for user ***/dev@***.COM using keytab file /home/***/key.keytab
> The line above comes from the Flink log. It shows that Kerberos authentication succeeded and login worked, yet the following exception was still thrown:
> 2020-02-28 17:17:08,658 INFO  org.apache.hadoop.hive.metastore.ObjectStore
> - Setting MetaStore object pin classes with
> hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
> 2020-02-28 17:17:09,280 INFO
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql           - Using
> direct SQL, underlying DB is MYSQL
> 2020-02-28 17:17:09,283 INFO  org.apache.hadoop.hive.metastore.ObjectStore
> - Initialized ObjectStore
> 2020-02-28 17:17:09,450 INFO
> org.apache.hadoop.hive.metastore.HiveMetaStore                - Added
> admin role in metastore
> 2020-02-28 17:17:09,452 INFO
> org.apache.hadoop.hive.metastore.HiveMetaStore                - Added
> public role in metastore
> 2020-02-28 17:17:09,474 INFO
> org.apache.hadoop.hive.metastore.HiveMetaStore                - No user is
> added in admin role, since config is empty
> 2020-02-28 17:17:09,634 INFO
> org.apache.flink.table.catalog.hive.HiveCatalog               - Connected
> to Hive metastore
> 2020-02-28 17:17:09,635 INFO
> org.apache.hadoop.hive.metastore.HiveMetaStore                - 0:
> get_database: ***
> 2020-02-28 17:17:09,637 INFO
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit          - ugi=***
> ip=unknown-ip-addr cmd=get_database: ***
> 2020-02-28 17:17:09,658 INFO  org.apache.hadoop.hive.ql.metadata.HiveUtils
> - Adding metastore authorization provider:
>
> org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
> 2020-02-28 17:17:10,166 WARN
> org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory       - The
> short-circuit local reads feature cannot be used because libhadoop cannot
> be loaded.
> 2020-02-28 17:17:10,391 WARN  org.apache.hadoop.ipc.Client
> - Exception encountered while connecting to the server :
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 2020-02-28 17:17:10,397 WARN  org.apache.hadoop.ipc.Client
> - Exception encountered while connecting to the server :
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> 2020-02-28 17:17:10,398 INFO
> org.apache.hadoop.io.retry.RetryInvocationHandler             - Exception
> while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over
> ******.org/***.***.***.***:8020 after 1 fail over attempts. Trying to fail
> over immediately.
> java.io.IOException: Failed on local exception: java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
> "***.***.***.org/***.***.***.***"; destination host is: "******.org":8020;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1480)
> at org.apache.hadoop.ipc.Client.call(Client.java:1413)
> at
>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy41.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.
> getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> at
>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy42.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
> at
>
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
> at
>
> org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
> at
>
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at
>
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
> at
>
> org.apache.hadoop.hive.common.FileUtils.getFileStatusOrNull(FileUtils.java:770)
> at
>
> org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:368)
> at
>
> org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:343)
> at
>
> org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:152)
> at
>
> org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeReadDatabase(AuthorizationPreEventListener.java:204)
> at
>
> org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.onEvent(AuthorizationPreEventListener.java:152)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2153)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:932)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
> at
>
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
> at com.sun.proxy.$Proxy35.get_database(Unknown Source)
> at
>
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1280)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:150)
> at com.sun.proxy.$Proxy36.getDatabase(Unknown Source)
> at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.
> getDatabase(HiveMetastoreClientWrapper.java:102)
> at
>
> org.apache.flink.table.catalog.hive.HiveCatalog.databaseExists(HiveCatalog.java:347)
> at
> org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:244)
> at
>
> org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
> at
>
> org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)
>
>
> ……the elided code here calls UserGroupInformation.loginUserFromKeytab(principal, keytab); and authentication succeeds
> at this is my code.main(MyMainClass.java:24)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
> at
>
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
> at
>
> org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
> at
>
> org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
> at
>
> org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
> at
>
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
> at
>
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at
>
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
> Caused by: java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at
>
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738)
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> ... 67 more
> Caused by: org.apache.hadoop.security.AccessControlException: Client
> cannot authenticate via:[TOKEN, KERBEROS]
> at
>
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
> at
>
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
> at
>
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561)
> at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
> ... 70 more
> My current diagnosis is that a polluted jar (conflicting bundled dependencies) is the cause. Any pointers would be much appreciated. Thanks!
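One way to test the "polluted jar" theory is to ask the JVM how many copies of a class are visible on the classpath. The sketch below uses only the JDK; the class name and the idea of calling it from the job's own main() are illustrative assumptions, while the resource path follows Hadoop's standard package layout. If UserGroupInformation resolves from more than one location, two jars bundle Hadoop's security classes, and the login state established in one copy is invisible to code loading the other:

```java
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class ClasspathDup {
    // Return every classpath location that provides the given resource.
    static List<URL> locate(String resource) throws Exception {
        return Collections.list(
                ClasspathDup.class.getClassLoader().getResources(resource));
    }

    public static void main(String[] args) throws Exception {
        // More than one URL printed here means two jars bundle Hadoop's
        // security classes and Kerberos state can get split between them.
        String res = "org/apache/hadoop/security/UserGroupInformation.class";
        List<URL> hits = locate(res);
        for (URL u : hits) {
            System.out.println(u);
        }
        System.out.println("copies found: " + hits.size());
    }
}
```

Running this from the job submitted to the cluster (rather than locally) shows the classpath of the process that actually fails, which can differ from the local test classpath.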
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
> On 2020-02-28 15:16, Rui Li <li...@apache.org> wrote:
>
> Hi 叶贤勋,
>
>
>
> I don't have a Kerberos environment at hand, but from the TokenCache code (version 2.7.5), this exception likely means the ResourceManager address or principal was not obtained correctly. Please check the following settings:
> mapreduce.framework.name
> yarn.resourcemanager.address
> yarn.resourcemanager.principal
> and verify that your Flink job can actually read these settings.
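For reference, these settings would typically live in the client-side mapred-site.xml and yarn-site.xml; the host, port, and realm below are placeholders, not actual cluster values:

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml (placeholder host and realm) -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host.example.com:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>yarn/_HOST@EXAMPLE.COM</value>
</property>
```

The "Can't get Master Kerberos principal for use as renewer" error in the stack trace below is what TokenCache throws when yarn.resourcemanager.principal is missing from the configuration the job reads.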
>
> On Fri, Feb 28, 2020 at 11:10 AM Kurt Young <yk...@gmail.com> wrote:
>
> cc @lirui@apache.org <li...@apache.org>
>
> Best,
> Kurt
>
>
> On Thu, Feb 13, 2020 at 10:22 AM 叶贤勋 <yx...@163.com> wrote:
>
> Hi all,
> I'm hitting an exception with a Kerberos-authenticated Hive 2.1.1 source and would like some advice.
> Flink version: 1.9
> Hive version: 2.1.1, with a HiveShimV211 implementation.
> Code:
> public class HiveCatalogTest {
> private static final Logger LOG =
> LoggerFactory.getLogger(HiveCatalogTest.class);
> private String hiveConfDir = "/Users/yexianxun/dev/env/test-hive"; //
> a local path
> private TableEnvironment tableEnv;
> private HiveCatalog hive;
> private String hiveName;
> private String hiveDB;
> private String version;
>
>
> @Before
> public void before() {
> EnvironmentSettings settings =
> EnvironmentSettings.newInstance()
> .useBlinkPlanner()
> .inBatchMode()
> .build();
> tableEnv = TableEnvironment.create(settings);
> hiveName = "myhive";
> hiveDB = "sloth";
> version = "2.1.1";
> }
>
>
> @Test
> public void testCatalogQuerySink() throws Exception {
> hive = new HiveCatalog(hiveName, hiveDB, hiveConfDir, version);
> System.setProperty("java.security.krb5.conf", hiveConfDir +
> "/krb5.conf");
> tableEnv.getConfig().getConfiguration().setString("stream_mode",
> "false");
> tableEnv.registerCatalog(hiveName, hive);
> tableEnv.useCatalog(hiveName);
> String query = "select * from " + hiveName + "." + hiveDB +
> ".testtbl2 where id = 20200202";
> Table table = tableEnv.sqlQuery(query);
> String newTableName = "testtbl2_1";
> table.insertInto(hiveName, hiveDB, newTableName);
> tableEnv.execute("test");
> }
> }
>
>
> HiveMetastoreClientFactory:
> public static HiveMetastoreClientWrapper create(HiveConf hiveConf,
> String hiveVersion) {
> Preconditions.checkNotNull(hiveVersion, "Hive version cannot be
> null");
> if (System.getProperty("java.security.krb5.conf") != null) {
> if (System.getProperty("had_set_kerberos") == null) {
> String principal = "sloth/dev@BDMS.163.COM";
> String keytab =
> "/Users/yexianxun/dev/env/mammut-test-hive/sloth.keytab";
> try {
> sun.security.krb5.Config.refresh();
> UserGroupInformation.setConfiguration(hiveConf);
> UserGroupInformation.loginUserFromKeytab(principal,
> keytab);
> System.setProperty("had_set_kerberos", "true");
> } catch (Exception e) {
> LOG.error("", e);
> }
> }
> }
> return new HiveMetastoreClientWrapper(hiveConf, hiveVersion);
> }
>
>
> HiveCatalog:
> private static HiveConf createHiveConf(@Nullable String hiveConfDir) {
> LOG.info("Setting hive conf dir as {}", hiveConfDir);
> try {
> HiveConf.setHiveSiteLocation(
> hiveConfDir == null ?
> null : Paths.get(hiveConfDir,
> "hive-site.xml").toUri().toURL());
> } catch (MalformedURLException e) {
> throw new CatalogException(
> String.format("Failed to get hive-site.xml from %s",
> hiveConfDir), e);
> }
>
>
> // create HiveConf from hadoop configuration
> HiveConf hiveConf = new
> HiveConf(HadoopUtils.getHadoopConfiguration(new
> org.apache.flink.configuration.Configuration()),
> HiveConf.class);
> try {
> hiveConf.addResource(Paths.get(hiveConfDir,
> "hdfs-site.xml").toUri().toURL());
> hiveConf.addResource(Paths.get(hiveConfDir,
> "core-site.xml").toUri().toURL());
> } catch (MalformedURLException e) {
> throw new CatalogException(String.format("Failed to get
> hdfs|core-site.xml from %s", hiveConfDir), e);
> }
> return hiveConf;
> }
>
>
> Running the testCatalogQuerySink method throws the following error:
> org.apache.flink.runtime.client.JobExecutionException: Could not retrieve
> JobResult.
>
>
> at
>
>
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:622)
> at
>
>
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:117)
> at
>
>
> org.apache.flink.table.planner.delegation.BatchExecutor.execute(BatchExecutor.java:55)
> at
>
>
> org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:410)
> at api.HiveCatalogTest.testCatalogQuerySink(HiveCatalogMumTest.java:234)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
>
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at
>
>
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at
>
>
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at
>
>
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at
>
>
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at
>
>
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at
>
>
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at
>
>
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at
>
>
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at
>
>
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed
> to submit job.
> at
>
>
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
> at
>
>
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
> at
>
>
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
> at
>
>
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
> at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
> at
>
>
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at
>
>
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at
>
>
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.RuntimeException:
> org.apache.flink.runtime.client.JobExecutionException: Could not set up
> JobManager
> at
>
>
> org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
> at
>
>
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> ... 6 more
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Could
> not set up JobManager
> at
>
>
> org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
> at
>
>
> org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
> at
>
>
> org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
> at
>
>
> org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
> ... 7 more
> Caused by: org.apache.flink.runtime.JobException: Creating the input
> splits caused an error: Can't get Master Kerberos principal for use as
> renewer
> at
>
>
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:270)
> at
>
>
> org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:907)
> at
>
>
> org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:230)
> at
>
>
> org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
> at
>
>
> org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
> at
>
>
> org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
> at
>
>
> org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
> at
>
>
> org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
> at
>
>
> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
> at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
> at
>
>
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
> at
>
>
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
> at
>
>
> org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
> ... 10 more
> Caused by: java.io.IOException: Can't get Master Kerberos principal for
> use as renewer
> at
>
>
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
> at
>
>
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
> at
>
>
> org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
> at
>
>
> org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
> at
>
>
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
> at
>
>
> org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:159)
> at
>
>
> org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:63)
> at
>
>
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:256)
> ... 22 more
>
>
> The sink test can insert data normally, but the Hive source fails with this
> error. It looks like fetching the delegation token returns empty. I'm not sure how to fix this.
>
>
>
>
>
> 叶贤勋
> yxx_cmhd@163.com
>
>
>
>

Re: Hive Source with Kerberos authentication issue

Posted by 叶贤勋 <yx...@163.com>.
Here is my correspondence with the Flink community; please take a look.


叶贤勋
yxx_cmhd@163.com


On 2020-03-03 16:00, 叶贤勋 <yx...@163.com> wrote:
The hive conf should be correct; the earlier UserGroupInformation login succeeded.
Without the datanucleus dependencies, class-not-found and similar exceptions are thrown:
1. java.lang.ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory
2. Caused by: org.datanucleus.exceptions.NucleusUserException: There is no available StoreManager of type "rdbms". Please make sure you have specified "datanucleus.storeManagerType" correctly and that all relevant plugins are in the CLASSPATH
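One hedged explanation for why datanucleus ends up being required: if the hive-site.xml that HiveCatalog reads does not set hive.metastore.uris, Hive falls back to an embedded metastore, which needs the datanucleus/JDO stack to talk to the backing RDBMS directly. A remote-metastore hive-site.xml would look roughly like this (host and principal are placeholders):

```xml
<!-- hive-site.xml: point the client at a remote metastore so no embedded
     metastore (and hence no datanucleus/JDO stack) is created in-process. -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host.example.com:9083</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
```

If the job needs datanucleus classes at runtime, that is itself a sign an embedded metastore is being created, consistent with the log excerpt above showing ObjectStore and MetaStoreDirectSql initialization inside the Flink process.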



叶贤勋
yxx_cmhd@163.com


On 2020-03-02 11:50, Rui Li <li...@apache.org> wrote:
From the log you posted, it looks like an embedded metastore was created. Can you check whether HiveCatalog picked up an incorrect hive
conf? Also, are all of these Maven dependencies packaged into your Flink job jar? The datanucleus dependencies, for instance, should not be needed.

On Sat, Feb 29, 2020 at 10:42 PM 叶贤勋 <yx...@163.com> wrote:

Hi 李锐, thanks for your reply.
The earlier problem was solved by setting yarn.resourcemanager.principal.
But now another problem has come up; please help take a look.

Background: the Flink job still sources & sinks a Kerberos-secured Hive. The same code passes Kerberos authentication in local tests and can query and insert data into Hive, but once submitted to the cluster the job fails with a Kerberos authentication error.
Flink: 1.9.1; flink-1.9.1/lib/ contains flink-dist_2.11-1.9.1.jar,
flink-shaded-hadoop-2-uber-2.7.5-7.0.jar, log4j-1.2.17.jar,
slf4j-log4j12-1.7.15.jar
Hive: 2.1.1
Main jar dependencies of the Flink job:
[INFO] +- org.apache.flink:flink-table-api-java:jar:flink-1.9.1:compile
[INFO] |  +- org.apache.flink:flink-table-common:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-core:jar:flink-1.9.1:compile
[INFO] |  |     +-
org.apache.flink:flink-annotations:jar:flink-1.9.1:compile
[INFO] |  |     +-
org.apache.flink:flink-metrics-core:jar:flink-1.9.1:compile
[INFO] |  |     \- com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
[INFO] |  |        +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
[INFO] |  |        \- org.objenesis:objenesis:jar:2.1:compile
[INFO] |  +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
[INFO] |  \- org.apache.flink:force-shading:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-table-planner-blink_2.11:jar:flink-1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-api-scala_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.scala-lang:scala-reflect:jar:2.11.12:compile
[INFO] |  |  \- org.scala-lang:scala-compiler:jar:2.11.12:compile
[INFO] |  +-
org.apache.flink:flink-table-api-java-bridge_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.apache.flink:flink-java:jar:flink-1.9.1:compile
[INFO] |  |  \-
org.apache.flink:flink-streaming-java_2.11:jar:1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-api-scala-bridge_2.11:jar:flink-1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-scala_2.11:jar:flink-1.9.1:compile
[INFO] |  +-
org.apache.flink:flink-table-runtime-blink_2.11:jar:flink-1.9.1:compile
[INFO] |  |  +- org.codehaus.janino:janino:jar:3.0.9:compile
[INFO] |  |  \- org.apache.calcite.avatica:avatica-core:jar:1.15.0:compile
[INFO] |  \- org.reflections:reflections:jar:0.9.10:compile
[INFO] +- org.apache.flink:flink-table-planner_2.11:jar:flink-1.9.1:compile
[INFO] +- org.apache.commons:commons-lang3:jar:3.9:compile
[INFO] +- com.typesafe.akka:akka-actor_2.11:jar:2.5.21:compile
[INFO] |  +- org.scala-lang:scala-library:jar:2.11.8:compile
[INFO] |  +- com.typesafe:config:jar:1.3.3:compile
[INFO] |  \-
org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
[INFO] +- org.apache.flink:flink-sql-client_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-clients_2.11:jar:1.9.1:compile
[INFO] |  |  \- org.apache.flink:flink-optimizer_2.11:jar:1.9.1:compile
[INFO] |  +- org.apache.flink:flink-streaming-scala_2.11:jar:1.9.1:compile
[INFO] |  +- log4j:log4j:jar:1.2.17:compile
[INFO] |  \- org.apache.flink:flink-shaded-jackson:jar:2.9.8-7.0:compile
[INFO] +- org.apache.flink:flink-json:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-csv:jar:1.9.1:compile
[INFO] +- org.apache.flink:flink-hbase_2.11:jar:1.9.1:compile
[INFO] +- org.apache.hbase:hbase-server:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-protobuf:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-netty:jar:2.2.1:compile
[INFO] |  +-
org.apache.hbase.thirdparty:hbase-shaded-miscellaneous:jar:2.2.1:compile
[INFO] |  |  \-
com.google.errorprone:error_prone_annotations:jar:2.3.3:compile
[INFO] |  +- org.apache.hbase:hbase-common:jar:2.2.1:compile
[INFO] |  |  \-
com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1:compile
[INFO] |  +- org.apache.hbase:hbase-http:jar:2.2.1:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-util:jar:9.3.27.v20190418:compile
[INFO] |  |  +-
org.eclipse.jetty:jetty-util-ajax:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.eclipse.jetty:jetty-http:jar:9.3.27.v20190418:compile
[INFO] |  |  +-
org.eclipse.jetty:jetty-security:jar:9.3.27.v20190418:compile
[INFO] |  |  +- org.glassfish.jersey.core:jersey-server:jar:2.25.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.core:jersey-common:jar:2.25.1:compile
[INFO] |  |  |  |  +-
org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1:compile
[INFO] |  |  |  |  \-
org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.core:jersey-client:jar:2.25.1:compile
[INFO] |  |  |  +-
org.glassfish.jersey.media:jersey-media-jaxb:jar:2.25.1:compile
[INFO] |  |  |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
[INFO] |  |  |  +- org.glassfish.hk2:hk2-api:jar:2.5.0-b32:compile
[INFO] |  |  |  |  +- org.glassfish.hk2:hk2-utils:jar:2.5.0-b32:compile
[INFO] |  |  |  |  \-
org.glassfish.hk2.external:aopalliance-repackaged:jar:2.5.0-b32:compile
[INFO] |  |  |  +-
org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32:compile
[INFO] |  |  |  \- org.glassfish.hk2:hk2-locator:jar:2.5.0-b32:compile
[INFO] |  |  +-
org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.25.1:compile
[INFO] |  |  \- javax.ws.rs:javax.ws.rs-api:jar:2.0.1:compile
[INFO] |  +- org.apache.hbase:hbase-protocol:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-protocol-shaded:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-procedure:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-client:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-zookeeper:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-replication:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-metrics-api:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-metrics:jar:2.2.1:compile
[INFO] |  +- commons-codec:commons-codec:jar:1.10:compile
[INFO] |  +- org.apache.hbase:hbase-hadoop-compat:jar:2.2.1:compile
[INFO] |  +- org.apache.hbase:hbase-hadoop2-compat:jar:2.2.1:compile
[INFO] |  +- org.eclipse.jetty:jetty-server:jar:9.3.27.v20190418:compile
[INFO] |  |  \- org.eclipse.jetty:jetty-io:jar:9.3.27.v20190418:compile
[INFO] |  +- org.eclipse.jetty:jetty-servlet:jar:9.3.27.v20190418:compile
[INFO] |  +- org.eclipse.jetty:jetty-webapp:jar:9.3.27.v20190418:compile
[INFO] |  |  \- org.eclipse.jetty:jetty-xml:jar:9.3.27.v20190418:compile
[INFO] |  +- org.glassfish.web:javax.servlet.jsp:jar:2.3.2:compile
[INFO] |  |  \- org.glassfish:javax.el:jar:3.0.1-b11:compile (version
selected from constraint [3.0.0,))
[INFO] |  +- javax.servlet.jsp:javax.servlet.jsp-api:jar:2.3.1:compile
[INFO] |  +- io.dropwizard.metrics:metrics-core:jar:3.2.6:compile
[INFO] |  +- commons-io:commons-io:jar:2.5:compile
[INFO] |  +- org.apache.commons:commons-math3:jar:3.6.1:compile
[INFO] |  +- org.apache.zookeeper:zookeeper:jar:3.4.10:compile
[INFO] |  +- javax.servlet:javax.servlet-api:jar:3.1.0:compile
[INFO] |  +- org.apache.htrace:htrace-core4:jar:4.2.0-incubating:compile
[INFO] |  +- com.lmax:disruptor:jar:3.3.6:compile
[INFO] |  +- commons-logging:commons-logging:jar:1.2:compile
[INFO] |  +- org.apache.commons:commons-crypto:jar:1.0.0:compile
[INFO] |  +- org.apache.hadoop:hadoop-distcp:jar:2.8.5:compile
[INFO] |  \- org.apache.yetus:audience-annotations:jar:0.5.0:compile
[INFO] +- com.google.protobuf:protobuf-java:jar:2.5.0:compile
[INFO] +- mysql:mysql-connector-java:jar:8.0.18:compile
[INFO] +- org.apache.flink:flink-connector-hive_2.11:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-hadoop-compatibility_2.11:jar:1.9.1:compile
[INFO] +-
org.apache.flink:flink-shaded-hadoop-2-uber:jar:2.7.5-7.0:provided
[INFO] +- org.apache.hive:hive-exec:jar:2.1.1:compile
[INFO] |  +- org.apache.hive:hive-ant:jar:2.1.1:compile
[INFO] |  |  \- org.apache.velocity:velocity:jar:1.5:compile
[INFO] |  |     \- oro:oro:jar:2.0.8:compile
[INFO] |  +- org.apache.hive:hive-llap-tez:jar:2.1.1:compile
[INFO] |  |  +- org.apache.hive:hive-common:jar:2.1.1:compile
[INFO] |  |  |  +- org.apache.hive:hive-storage-api:jar:2.1.1:compile
[INFO] |  |  |  +- org.apache.hive:hive-orc:jar:2.1.1:compile
[INFO] |  |  |  |  \- org.iq80.snappy:snappy:jar:0.2:compile
[INFO] |  |  |  +- org.eclipse.jetty.aggregate:jetty-all:jar:7.6.0.v20120127:compile
[INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-jta_1.1_spec:jar:1.1.1:compile
[INFO] |  |  |  |  +- javax.mail:mail:jar:1.4.1:compile
[INFO] |  |  |  |  +- javax.activation:activation:jar:1.1:compile
[INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-jaspic_1.0_spec:jar:1.0:compile
[INFO] |  |  |  |  +- org.apache.geronimo.specs:geronimo-annotation_1.0_spec:jar:1.1.1:compile
[INFO] |  |  |  |  \- asm:asm-commons:jar:3.1:compile
[INFO] |  |  |  |     \- asm:asm-tree:jar:3.1:compile
[INFO] |  |  |  |        \- asm:asm:jar:3.1:compile
[INFO] |  |  |  +- org.eclipse.jetty.orbit:javax.servlet:jar:3.0.0.v201112011016:compile
[INFO] |  |  |  +- joda-time:joda-time:jar:2.8.1:compile
[INFO] |  |  |  +- org.json:json:jar:20160810:compile
[INFO] |  |  |  +- io.dropwizard.metrics:metrics-jvm:jar:3.1.0:compile
[INFO] |  |  |  +- io.dropwizard.metrics:metrics-json:jar:3.1.0:compile
[INFO] |  |  |  \- com.github.joshelser:dropwizard-metrics-hadoop-metrics2-reporter:jar:0.1.2:compile
[INFO] |  |  \- org.apache.hive:hive-llap-client:jar:2.1.1:compile
[INFO] |  |     \- org.apache.hive:hive-llap-common:jar:2.1.1:compile
[INFO] |  |        \- org.apache.hive:hive-serde:jar:2.1.1:compile
[INFO] |  |           +- org.apache.hive:hive-service-rpc:jar:2.1.1:compile
[INFO] |  |           |  +- tomcat:jasper-compiler:jar:5.5.23:compile
[INFO] |  |           |  |  +- javax.servlet:jsp-api:jar:2.0:compile
[INFO] |  |           |  |  \- ant:ant:jar:1.6.5:compile
[INFO] |  |           |  +- tomcat:jasper-runtime:jar:5.5.23:compile
[INFO] |  |           |  |  +- javax.servlet:servlet-api:jar:2.4:compile
[INFO] |  |           |  |  \- commons-el:commons-el:jar:1.0:compile
[INFO] |  |           |  \- org.apache.thrift:libfb303:jar:0.9.3:compile
[INFO] |  |           +- org.apache.avro:avro:jar:1.7.7:compile
[INFO] |  |           |  \- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO] |  |           +- net.sf.opencsv:opencsv:jar:2.3:compile
[INFO] |  |           \- org.apache.parquet:parquet-hadoop-bundle:jar:1.8.1:compile
[INFO] |  +- org.apache.hive:hive-shims:jar:2.1.1:compile
[INFO] |  |  +- org.apache.hive.shims:hive-shims-common:jar:2.1.1:compile
[INFO] |  |  |  \- org.apache.thrift:libthrift:jar:0.9.3:compile
[INFO] |  |  +- org.apache.hive.shims:hive-shims-0.23:jar:2.1.1:runtime
[INFO] |  |  |  \- org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:2.6.1:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-annotations:jar:2.6.1:runtime
[INFO] |  |  |     +- com.google.inject.extensions:guice-servlet:jar:3.0:runtime
[INFO] |  |  |     +- com.google.inject:guice:jar:3.0:runtime
[INFO] |  |  |     |  +- javax.inject:javax.inject:jar:1:runtime
[INFO] |  |  |     |  \- aopalliance:aopalliance:jar:1.0:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-json:jar:1.9:runtime
[INFO] |  |  |     |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:runtime
[INFO] |  |  |     |  +- org.codehaus.jackson:jackson-core-asl:jar:1.8.3:compile
[INFO] |  |  |     |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.8.3:compile
[INFO] |  |  |     |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.8.3:runtime
[INFO] |  |  |     |  \- org.codehaus.jackson:jackson-xc:jar:1.8.3:runtime
[INFO] |  |  |     +- com.sun.jersey.contribs:jersey-guice:jar:1.9:runtime
[INFO] |  |  |     |  \- com.sun.jersey:jersey-server:jar:1.9:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-common:jar:2.6.1:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-api:jar:2.6.1:runtime
[INFO] |  |  |     +- javax.xml.bind:jaxb-api:jar:2.2.2:runtime
[INFO] |  |  |     |  \- javax.xml.stream:stax-api:jar:1.0-2:runtime
[INFO] |  |  |     +- org.codehaus.jettison:jettison:jar:1.1:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-core:jar:1.9:runtime
[INFO] |  |  |     +- com.sun.jersey:jersey-client:jar:1.9:runtime
[INFO] |  |  |     +- org.mortbay.jetty:jetty-util:jar:6.1.26:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-server-common:jar:2.6.1:runtime
[INFO] |  |  |     |  \- org.fusesource.leveldbjni:leveldbjni-all:jar:1.8:runtime
[INFO] |  |  |     +- org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:2.6.1:runtime
[INFO] |  |  |     \- org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:2.6.1:runtime
[INFO] |  |  |        \- org.mortbay.jetty:jetty:jar:6.1.26:runtime
[INFO] |  |  \- org.apache.hive.shims:hive-shims-scheduler:jar:2.1.1:runtime
[INFO] |  +- commons-httpclient:commons-httpclient:jar:3.0.1:compile
[INFO] |  +- org.antlr:antlr-runtime:jar:3.4:compile
[INFO] |  |  +- org.antlr:stringtemplate:jar:3.2.1:compile
[INFO] |  |  \- antlr:antlr:jar:2.7.7:compile
[INFO] |  +- org.antlr:ST4:jar:4.0.4:compile
[INFO] |  +- org.apache.ant:ant:jar:1.9.1:compile
[INFO] |  |  \- org.apache.ant:ant-launcher:jar:1.9.1:compile
[INFO] |  +- org.apache.commons:commons-compress:jar:1.10:compile
[INFO] |  +- org.apache.ivy:ivy:jar:2.4.0:compile
[INFO] |  +- org.apache.curator:curator-framework:jar:2.6.0:compile
[INFO] |  |  \- org.apache.curator:curator-client:jar:2.6.0:compile
[INFO] |  +- org.apache.curator:apache-curator:pom:2.6.0:compile
[INFO] |  +- org.codehaus.groovy:groovy-all:jar:2.4.4:compile
[INFO] |  +- org.apache.calcite:calcite-core:jar:1.6.0:compile
[INFO] |  |  +- org.apache.calcite:calcite-linq4j:jar:1.6.0:compile
[INFO] |  |  +- commons-dbcp:commons-dbcp:jar:1.4:compile
[INFO] |  |  |  \- commons-pool:commons-pool:jar:1.5.4:compile
[INFO] |  |  +- net.hydromatic:aggdesigner-algorithm:jar:6.0:compile
[INFO] |  |  \- org.codehaus.janino:commons-compiler:jar:2.7.6:compile
[INFO] |  +- org.apache.calcite:calcite-avatica:jar:1.6.0:compile
[INFO] |  +- stax:stax-api:jar:1.0.1:compile
[INFO] |  \- jline:jline:jar:2.12:compile
[INFO] +- org.datanucleus:datanucleus-core:jar:4.1.6:compile
[INFO] +- org.datanucleus:datanucleus-api-jdo:jar:4.2.4:compile
[INFO] +- org.datanucleus:javax.jdo:jar:3.2.0-m3:compile
[INFO] |  \- javax.transaction:transaction-api:jar:1.1:compile
[INFO] +- org.datanucleus:datanucleus-rdbms:jar:4.1.9:compile
[INFO] +- hadoop-lzo:hadoop-lzo:jar:0.4.14:compile
[INFO] \- org.apache.flink:flink-runtime-web_2.11:jar:1.9.1:provided
[INFO]    +- org.apache.flink:flink-runtime_2.11:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-queryable-state-client-java:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-hadoop-fs:jar:1.9.1:compile
[INFO]    |  +- org.apache.flink:flink-shaded-asm-6:jar:6.2.1-7.0:compile
[INFO]    |  +- com.typesafe.akka:akka-stream_2.11:jar:2.5.21:compile
[INFO]    |  |  +- org.reactivestreams:reactive-streams:jar:1.0.2:compile
[INFO]    |  |  \- com.typesafe:ssl-config-core_2.11:jar:0.3.7:compile
[INFO]    |  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.5.21:compile
[INFO]    |  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.11:compile
[INFO]    |  +- org.clapper:grizzled-slf4j_2.11:jar:1.3.2:compile
[INFO]    |  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
[INFO]    |  +- org.xerial.snappy:snappy-java:jar:1.1.4:compile
[INFO]    |  \- com.twitter:chill_2.11:jar:0.7.6:compile
[INFO]    |     \- com.twitter:chill-java:jar:0.7.6:compile
[INFO]    +- org.apache.flink:flink-shaded-netty:jar:4.1.32.Final-7.0:compile
[INFO]    +- org.apache.flink:flink-shaded-guava:jar:18.0-7.0:compile
[INFO]    \- org.javassist:javassist:jar:3.19.0-GA:compile
[INFO] ------------------------------------------------------------------------

Logs:

2020-02-28 17:17:07,890 INFO  org.apache.hadoop.security.UserGroupInformation - Login successful for user ***/dev@***.COM using keytab file /home/***/key.keytab
The line above is printed by Flink; it shows that Kerberos authentication succeeded and the login worked, yet the following exception was still thrown:
2020-02-28 17:17:08,658 INFO  org.apache.hadoop.hive.metastore.ObjectStore - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,Database,Type,FieldSchema,Order"
2020-02-28 17:17:09,280 INFO  org.apache.hadoop.hive.metastore.MetaStoreDirectSql - Using direct SQL, underlying DB is MYSQL
2020-02-28 17:17:09,283 INFO  org.apache.hadoop.hive.metastore.ObjectStore - Initialized ObjectStore
2020-02-28 17:17:09,450 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - Added admin role in metastore
2020-02-28 17:17:09,452 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - Added public role in metastore
2020-02-28 17:17:09,474 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - No user is added in admin role, since config is empty
2020-02-28 17:17:09,634 INFO  org.apache.flink.table.catalog.hive.HiveCatalog - Connected to Hive metastore
2020-02-28 17:17:09,635 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore - 0: get_database: ***
2020-02-28 17:17:09,637 INFO  org.apache.hadoop.hive.metastore.HiveMetaStore.audit - ugi=*** ip=unknown-ip-addr cmd=get_database: ***
2020-02-28 17:17:09,658 INFO  org.apache.hadoop.hive.ql.metadata.HiveUtils - Adding metastore authorization provider: org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
2020-02-28 17:17:10,166 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2020-02-28 17:17:10,391 WARN  org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
2020-02-28 17:17:10,397 WARN  org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
2020-02-28 17:17:10,398 INFO  org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over ******.org/***.***.***.***:8020 after 1 fail over attempts. Trying to fail over immediately.
java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "***.***.***.org/***.***.***.***"; destination host is: "******.org":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
at org.apache.hadoop.ipc.Client.call(Client.java:1480)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy41.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.
getFileInfo(ClientNamenodeProtocolTranslatorPB.java:776)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy42.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2117)
at
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
at
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
at
org.apache.hadoop.hive.common.FileUtils.getFileStatusOrNull(FileUtils.java:770)
at
org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:368)
at
org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:343)
at
org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:152)
at
org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeReadDatabase(AuthorizationPreEventListener.java:204)
at
org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.onEvent(AuthorizationPreEventListener.java:152)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2153)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_database(HiveMetaStore.java:932)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
at com.sun.proxy.$Proxy35.get_database(Unknown Source)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1280)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:150)
at com.sun.proxy.$Proxy36.getDatabase(Unknown Source)
at org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.
getDatabase(HiveMetastoreClientWrapper.java:102)
at
org.apache.flink.table.catalog.hive.HiveCatalog.databaseExists(HiveCatalog.java:347)
at
org.apache.flink.table.catalog.hive.HiveCatalog.open(HiveCatalog.java:244)
at
org.apache.flink.table.catalog.CatalogManager.registerCatalog(CatalogManager.java:153)
at
org.apache.flink.table.api.internal.TableEnvironmentImpl.registerCatalog(TableEnvironmentImpl.java:170)

……(the code elided here calls UserGroupInformation.loginUserFromKeytab(principal, keytab) and authenticates successfully)
at MyMainClass.main(MyMainClass.java:24)  (this is my code)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
at
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
at
org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
at
org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
at
org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:122)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:227)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
at
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: java.io.IOException:
org.apache.hadoop.security.AccessControlException: Client cannot
authenticate via:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:651)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:738)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
at org.apache.hadoop.ipc.Client.call(Client.java:1452)
... 67 more
Caused by: org.apache.hadoop.security.AccessControlException: Client
cannot authenticate via:[TOKEN, KERBEROS]
at
org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:172)
at
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
at
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:561)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:376)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:730)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:726)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:726)
... 70 more
My current diagnosis is that the jar may be polluted (conflicting bundled dependencies). Any pointers would be much appreciated. Thanks!
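If it helps others debug the same symptom: one way to test the polluted-jar theory is to list what actually ended up inside the job's fat jar, since Hadoop security/conf classes bundled there can shadow the copies in flink-shaded-hadoop-2-uber and break the UGI login state. A minimal sketch (the jar path and the package prefixes to flag are my assumptions, not from this thread):

```python
import zipfile

# Packages that should normally come from the Flink/Hadoop runtime
# ("provided" scope), not be packaged into the job jar itself.
SUSPECT_PREFIXES = (
    "org/apache/hadoop/security/",
    "org/apache/hadoop/conf/",
)

def suspect_entries(jar_path):
    """List .class entries in the jar that fall under a suspect package."""
    with zipfile.ZipFile(jar_path) as jar:
        return [name for name in jar.namelist()
                if name.endswith(".class") and name.startswith(SUSPECT_PREFIXES)]
```

Running it against the submitted jar (e.g. `suspect_entries("myjob.jar")`) and getting a non-empty list would support the pollution theory; marking the hadoop/hive dependencies as provided usually clears it.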

叶贤勋
yxx_cmhd@163.com


On 2020-02-28 15:16, Rui Li <li...@apache.org> wrote:

Hi 叶贤勋,


I don't have a Kerberos environment at hand, but from the TokenCache code (version 2.7.5) this exception is likely caused by failing to obtain the RM address or principal. Please check the following configs:
mapreduce.framework.name
yarn.resourcemanager.address
yarn.resourcemanager.principal
and verify that your Flink job can actually read them.
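A quick way to sanity-check those keys before involving Flink is to parse the cluster's *-site.xml files directly. A minimal sketch (the sample conf is a placeholder; only the three property names above come from this thread):

```python
import xml.etree.ElementTree as ET

# The three keys suggested above; an unset yarn.resourcemanager.principal is
# the classic cause of "Can't get Master Kerberos principal for use as renewer".
REQUIRED_KEYS = [
    "mapreduce.framework.name",
    "yarn.resourcemanager.address",
    "yarn.resourcemanager.principal",
]

def load_hadoop_conf(xml_text):
    """Parse a Hadoop *-site.xml document into a {name: value} dict."""
    conf = {}
    for prop in ET.fromstring(xml_text).iter("property"):
        name = prop.findtext("name")
        if name is not None:
            conf[name] = prop.findtext("value")
    return conf

def missing_keys(conf):
    """Return the required keys that are absent or empty."""
    return [k for k in REQUIRED_KEYS if not conf.get(k)]

# Demo with an inline sample; in practice read the yarn-site.xml /
# mapred-site.xml under the conf dir your Flink job actually picks up.
sample = """<configuration>
  <property>
    <name>yarn.resourcemanager.principal</name>
    <value>rm/_HOST@EXAMPLE.COM</value>
  </property>
</configuration>"""
print(missing_keys(load_hadoop_conf(sample)))
```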

On Fri, Feb 28, 2020 at 11:10 AM Kurt Young <yk...@gmail.com> wrote:

cc @lirui@apache.org <li...@apache.org>

Best,
Kurt


On Thu, Feb 13, 2020 at 10:22 AM 叶贤勋 <yx...@163.com> wrote:

Hi everyone,
I'm running into an exception when using a Hive 2.1.1 source with Kerberos authentication.
Flink version: 1.9
Hive version: 2.1.1 (I implemented HiveShimV211).
Code:
public class HiveCatalogTest {
    private static final Logger LOG = LoggerFactory.getLogger(HiveCatalogTest.class);
    private String hiveConfDir = "/Users/yexianxun/dev/env/test-hive"; // a local path
    private TableEnvironment tableEnv;
    private HiveCatalog hive;
    private String hiveName;
    private String hiveDB;
    private String version;

    @Before
    public void before() {
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inBatchMode()
                .build();
        tableEnv = TableEnvironment.create(settings);
        hiveName = "myhive";
        hiveDB = "sloth";
        version = "2.1.1";
    }

    @Test
    public void testCatalogQuerySink() throws Exception {
        hive = new HiveCatalog(hiveName, hiveDB, hiveConfDir, version);
        System.setProperty("java.security.krb5.conf", hiveConfDir + "/krb5.conf");
        tableEnv.getConfig().getConfiguration().setString("stream_mode", "false");
        tableEnv.registerCatalog(hiveName, hive);
        tableEnv.useCatalog(hiveName);
        String query = "select * from " + hiveName + "." + hiveDB + ".testtbl2 where id = 20200202";
        Table table = tableEnv.sqlQuery(query);
        String newTableName = "testtbl2_1";
        table.insertInto(hiveName, hiveDB, newTableName);
        tableEnv.execute("test");
    }
}


HiveMetastoreClientFactory:
public static HiveMetastoreClientWrapper create(HiveConf hiveConf, String hiveVersion) {
    Preconditions.checkNotNull(hiveVersion, "Hive version cannot be null");
    if (System.getProperty("java.security.krb5.conf") != null) {
        if (System.getProperty("had_set_kerberos") == null) {
            String principal = "sloth/dev@BDMS.163.COM";
            String keytab = "/Users/yexianxun/dev/env/mammut-test-hive/sloth.keytab";
            try {
                sun.security.krb5.Config.refresh();
                UserGroupInformation.setConfiguration(hiveConf);
                UserGroupInformation.loginUserFromKeytab(principal, keytab);
                System.setProperty("had_set_kerberos", "true");
            } catch (Exception e) {
                LOG.error("", e);
            }
        }
    }
    return new HiveMetastoreClientWrapper(hiveConf, hiveVersion);
}


HiveCatalog:
private static HiveConf createHiveConf(@Nullable String hiveConfDir) {
    LOG.info("Setting hive conf dir as {}", hiveConfDir);
    try {
        HiveConf.setHiveSiteLocation(
                hiveConfDir == null ?
                        null : Paths.get(hiveConfDir, "hive-site.xml").toUri().toURL());
    } catch (MalformedURLException e) {
        throw new CatalogException(
                String.format("Failed to get hive-site.xml from %s", hiveConfDir), e);
    }

    // create HiveConf from hadoop configuration
    HiveConf hiveConf = new HiveConf(
            HadoopUtils.getHadoopConfiguration(new org.apache.flink.configuration.Configuration()),
            HiveConf.class);
    try {
        hiveConf.addResource(Paths.get(hiveConfDir, "hdfs-site.xml").toUri().toURL());
        hiveConf.addResource(Paths.get(hiveConfDir, "core-site.xml").toUri().toURL());
    } catch (MalformedURLException e) {
        throw new CatalogException(String.format("Failed to get hdfs|core-site.xml from %s", hiveConfDir), e);
    }
    return hiveConf;
}


Running the testCatalogQuerySink method produces the following error:
org.apache.flink.runtime.client.JobExecutionException: Could not retrieve
JobResult.


at

org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:622)
at

org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:117)
at

org.apache.flink.table.planner.delegation.BatchExecutor.execute(BatchExecutor.java:55)
at

org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:410)
at api.HiveCatalogTest.testCatalogQuerySink(HiveCatalogMumTest.java:234)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at

sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at

sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at

org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at

org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at

org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at

org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at

org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at

org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at

org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at

com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at

com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at

com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed
to submit job.
at

org.apache.flink.runtime.dispatcher.Dispatcher.lambda$internalSubmitJob$2(Dispatcher.java:333)
at

java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at

java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at

java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at

akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at

akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at

akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.RuntimeException:
org.apache.flink.runtime.client.JobExecutionException: Could not set up
JobManager
at

org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:36)
at

java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
... 6 more
Caused by: org.apache.flink.runtime.client.JobExecutionException: Could
not set up JobManager
at

org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:152)
at

org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:83)
at

org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:375)
at

org.apache.flink.util.function.CheckedSupplier.lambda$unchecked$0(CheckedSupplier.java:34)
... 7 more
Caused by: org.apache.flink.runtime.JobException: Creating the input
splits caused an error: Can't get Master Kerberos principal for use as
renewer
at

org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:270)
at

org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:907)
at

org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:230)
at

org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:106)
at

org.apache.flink.runtime.scheduler.LegacyScheduler.createExecutionGraph(LegacyScheduler.java:207)
at

org.apache.flink.runtime.scheduler.LegacyScheduler.createAndRestoreExecutionGraph(LegacyScheduler.java:184)
at

org.apache.flink.runtime.scheduler.LegacyScheduler.<init>(LegacyScheduler.java:176)
at

org.apache.flink.runtime.scheduler.LegacySchedulerFactory.createInstance(LegacySchedulerFactory.java:70)
at

org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:278)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:266)
at

org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:98)
at

org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:40)
at

org.apache.flink.runtime.jobmaster.JobManagerRunner.<init>(JobManagerRunner.java:146)
... 10 more
Caused by: java.io.IOException: Can't get Master Kerberos principal for
use as renewer
at

org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
at

org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at

org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at

org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
at

org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at

org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:159)
at

org.apache.flink.connectors.hive.HiveTableInputFormat.createInputSplits(HiveTableInputFormat.java:63)
at

org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:256)
... 22 more


The sink test can insert data normally, but the Hive source fails with this error. It looks like obtaining the delegation token returned null. I'm not sure how to fix it.
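For reference, the "Can't get Master Kerberos principal for use as renewer" branch in TokenCache reduces to Hadoop resolving the RM principal from yarn.resourcemanager.principal and giving up when it comes back empty, which matches the fix mentioned upthread (setting that key). An illustrative mimic of the check in Python (not Hadoop's actual code):

```python
def master_principal(conf):
    """Illustrative stand-in for the RM-principal lookup that TokenCache
    performs before requesting HDFS delegation tokens for the job."""
    principal = conf.get("yarn.resourcemanager.principal")
    if not principal:
        # This is the exception surfaced in the stack trace above.
        raise IOError("Can't get Master Kerberos principal for use as renewer")
    return principal

print(master_principal({"yarn.resourcemanager.principal": "rm/_HOST@EXAMPLE.COM"}))
```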





叶贤勋
yxx_cmhd@163.com