Posted to dev@kylin.apache.org by Billy Liu <bi...@apache.org> on 2017/03/01 09:43:06 UTC

Re: build cube failed in 3 step, Step Name: Extract Fact Table Distinct Columns

Hi Wangxg,

As the log says: "Caused by: MetaException(message:Can not create filePath:
/user/hive/warehouse Permission denied. user=dataprocess_user is not the
owner of inode=warehouse"

The Kylin user (dataprocess_user) creates intermediate tables in Hive, but
this user does not have enough permission on the warehouse directory.
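
For reference, the stack trace shows the metastore failing on an HDFS
setPermission call against /user/hive/warehouse, which HDFS only allows
for the directory's owner or the superuser. A quick way to inspect this
(a sketch only; the warehouse path and whether HDFS ACLs are enabled
depend on your cluster):

    # Check who owns the Hive warehouse directory
    hdfs dfs -ls /user/hive

    # If your cluster has dfs.namenode.acls.enabled=true, an admin could
    # grant the Kylin user access without changing ownership:
    hdfs dfs -setfacl -m user:dataprocess_user:rwx /user/hive/warehouse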

To verify, you could try logging in to Hive as this user (dataprocess_user)
and creating some tables, as in the sketch below.
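
A minimal check from the shell, assuming the hive CLI is available on the
Kylin node and you can switch to that account (the table name here is just
a throwaway placeholder):

    # Reproduce the metastore table creation outside of Kylin
    sudo -u dataprocess_user hive -e "CREATE TABLE kylin_perm_check (id INT); DROP TABLE kylin_perm_check;"

If this fails with the same MetaException, the problem is in the Hive/HDFS
permissions rather than in Kylin itself.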

2017-02-28 10:23 GMT+08:00 wangxg <63...@qq.com>:

> Hi,
> help needed!!
> Our cluster has an admin user; when we use this user to build the cube, it
> succeeds. But for security reasons we do not want to build the cube as the
> admin user, so we created a new user for Kylin named dataprocess_user. When
> we build the cube as the new user, it fails. We then gave dataprocess_user
> permission to operate HDFS, Hive, and HBase, but it still fails. I do not
> know what permissions are needed to build the cube.
>
> *The error info is as follows:*
> java.lang.RuntimeException: java.io.IOException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
>         at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:94)
>         at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:119)
>         at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:103)
>         at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:88)
>         at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
>         at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
>         at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
>         at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
>         at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
>         at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
>         at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:89)
>         ... 11 more
> Caused by: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
>         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2263)
>         at com.google.common.cache.LocalCache.get(LocalCache.java:4000)
>         at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4789)
>         at org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227)
>         at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202)
>         at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
>         at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
>         at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
>         at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
>         ... 13 more
> Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
>         at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1532)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:87)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133)
>         at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
>         at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:230)
>         at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)
>         at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4792)
>         at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
>         at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
>         at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
>         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2257)
>         ... 21 more
> Caused by: java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1530)
>         ... 31 more
> Caused by: MetaException(message:Can not create filePath: /user/hive/warehouse Permission denied. user=dataprocess_user is not the owner of inode=warehouse
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:268)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:245)
>         at org.apache.hadoop.hdfs.server.namenode.HWINodeAttributeProvider$HWAccessControlEnforce.checkPermission(HWINodeAttributeProvider.java:96)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1730)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1714)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1683)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:60)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1652)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:808)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:469)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:973)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2143)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2139)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1710)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2137)
> )
>         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createHdfsPath(HiveMetaStore.java:590)
>         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:489)
>         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
>         at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
>         at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6903)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:212)
>         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:194)
>         at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.<init>(HiveClientCache.java:330)
>         ... 36 more
>
> --
> View this message in context: http://apache-kylin.74782.x6.nabble.com/build-cube-failed-in-3-step-Step-Name-Extract-Fact-Table-Distinct-Columns-tp7307.html
> Sent from the Apache Kylin mailing list archive at Nabble.com.
>