Posted to issues@kylin.apache.org by "xl_zl (Jira)" <ji...@apache.org> on 2019/12/04 13:16:00 UTC

[jira] [Created] (KYLIN-4278) Build cube error in step 3. Connected to metastore, then MetaStoreClient lost connection

xl_zl created KYLIN-4278:
----------------------------

             Summary: Build cube error in step 3. Connected to metastore, then MetaStoreClient lost connection
                 Key: KYLIN-4278
                 URL: https://issues.apache.org/jira/browse/KYLIN-4278
             Project: Kylin
          Issue Type: Bug
          Components: Metadata, Security
    Affects Versions: v3.0.0-beta
         Environment: hadoop 3.0
hiveserver2
hive metastore
beeline
            Reporter: xl_zl
             Fix For: Future


When I build a cube, I encounter a strange issue in step 3 (name=Extract Fact Table Distinct Columns). Kylin connects to the Hive metastore to fetch table metadata, but the metastore server throws an exception:

Error occurred during processing of message. | java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
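For reference, the Kylin stack trace further down shows that this step goes through HCatInputFormat.setInput -> HCatUtil.getHiveMetastoreClient -> HiveMetaStoreClient.getTable. A minimal standalone sketch of that same call path could look like the following (this is only an illustration; the principal, keytab path and table name are placeholders, and it assumes the same hive-site.xml the Kylin job sees is on the classpath):

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Table;
import org.apache.hadoop.security.UserGroupInformation;

// Standalone probe of the same metastore call that step 3 performs.
public class MetastoreGetTableProbe {
    public static void main(String[] args) throws Exception {
        // The cluster is kerberized (auth:KERBEROS in the Kylin log), so log in first.
        // Principal and keytab path below are placeholders.
        UserGroupInformation.loginUserFromKeytab(
                "kylin@EXAMPLE.COM", "/etc/security/keytabs/kylin.keytab");

        // HiveConf picks up hive-site.xml from the classpath, i.e. the same
        // client-side metastore settings the Kylin job would use.
        HiveConf conf = new HiveConf();

        HiveMetaStoreClient client = new HiveMetaStoreClient(conf); // opens the Thrift connection
        Table table = client.getTable("default", "my_fact_table");  // the call that fails in step 3
        System.out.println("Got table: " + table.getTableName());
        client.close();
    }
}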

==============metastore-server error logs:===============

2019-12-04 17:50:10,180 | ERROR | pool-10-thread-173 | Error occurred during processing of message. | java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
	at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:694) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:691) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_212]
	at javax.security.auth.Subject.doAs(Subject.java:360) ~[?:1.8.0_212]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709) ~[hadoop-common-3.1.1-mrs-2.0.jar:?]
	at org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:691) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_212]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_212]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
	at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[hive-exec-3.1.0-mrs-2.0.jar:3.1.0-mrs-2.0]
	... 10 more
2019-12-04 17:50:10,399 | ERROR | pool-10-thread-173 | Error occurred during processing of message. | java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
	(identical stack trace as above)

 

 

================Kylin error logs:=================

2019-12-04 20:07:11,516 WARN  [Scheduler 1039145475 Job c2cd7ddc-689a-864d-640b-1d73b18eb42f-113] metastore.HiveMetaStoreClient:645 : set_ugi() not successful, Likely cause: new client talking to old server. Continuing without it.
org.apache.thrift.transport.TTransportException
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:380)
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_set_ugi(ThriftHiveMetastore.java:4814)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.set_ugi(ThriftHiveMetastore.java:4800)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:637)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:228)
	at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.<init>(HiveClientCache.java:409)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
	at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:297)
	at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:292)
	at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
	at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
	at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
	at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
	at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
	at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
	at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
	at org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:292)
	at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:274)
	at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:569)
	at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
	at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:88)
	at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
	at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
	at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:80)
	at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:126)
	at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:104)
	at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:131)
	at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:179)
	at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
	at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:179)
	at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
2019-12-04 20:07:11,517 INFO  [Scheduler 1039145475 Job c2cd7ddc-689a-864d-640b-1d73b18eb42f-113] metastore.HiveMetaStoreClient:673 : Connected to metastore.
2019-12-04 20:07:11,517 INFO  [Scheduler 1039145475 Job c2cd7ddc-689a-864d-640b-1d73b18eb42f-113] metastore.RetryingMetaStoreClient:97 : RetryingMetaStoreClient proxy=class org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient ugi=fiboadmin@0B019988_A1B0_4F31_AE11_6299A85F88FF.COM (auth:KERBEROS) retries=1 delay=1 lifetime=0
2019-12-04 20:07:11,532 WARN  [Scheduler 1039145475 Job c2cd7ddc-689a-864d-640b-1d73b18eb42f-113] metastore.RetryingMetaStoreClient:257 : MetaStoreClient lost connection. Attempting to reconnect (1 of 1) after 1s. getTable

org.apache.thrift.transport.TTransportException
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table_req(ThriftHiveMetastore.java:2083)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table_req(ThriftHiveMetastore.java:2070)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1686)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:1678)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:208)
	at com.sun.proxy.$Proxy69.getTable(Unknown Source)
	at org.apache.hive.hcatalog.common.HCatUtil.getTable(HCatUtil.java:191)
	at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:105)
	at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:88)
	at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
	at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
	at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:80)
	at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:126)
	at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:104)
	at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:131)
	at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:179)
	at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
	at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:179)
	at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
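My current guess, which I have not confirmed: the server-side "Invalid status -128" from TSaslServerTransport together with the client-side "set_ugi() not successful" warning usually indicates a plain (non-SASL) Thrift client handshaking with a SASL/Kerberos-secured metastore, i.e. the Hive configuration seen by the Kylin job may be missing the metastore SASL settings. A small sketch of the client-side properties that would matter in that case; the URI and principal are placeholders, not values from my cluster:

import org.apache.hadoop.hive.conf.HiveConf;

// Hypothetical helper: builds the HiveConf a metastore client would need on a
// Kerberos-secured cluster. The property names are standard Hive settings;
// the host and principal are placeholders.
public class SecureMetastoreConf {
    public static HiveConf build() {
        HiveConf conf = new HiveConf();
        conf.set("hive.metastore.uris", "thrift://metastore-host:9083");
        // Without SASL enabled the client opens a plain binary-protocol connection;
        // the SASL server then reads the first frame byte (0x80) as status -128
        // and drops the connection, which matches the logs above.
        conf.set("hive.metastore.sasl.enabled", "true");
        conf.set("hive.metastore.kerberos.principal", "hive/_HOST@EXAMPLE.COM");
        return conf;
    }
}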


