Posted to issues@kylin.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2020/07/06 08:40:00 UTC

[jira] [Commented] (KYLIN-4605) HiveProducer writes fewer metrics when one host runs multiple Kylin servers

    [ https://issues.apache.org/jira/browse/KYLIN-4605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17151877#comment-17151877 ] 

ASF GitHub Bot commented on KYLIN-4605:
---------------------------------------

bigxiaochu closed pull request #1290:
URL: https://github.com/apache/kylin/pull/1290




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


> HiveProducer writes fewer metrics when one host runs multiple Kylin servers
> ------------------------------------------------------------------------
>
>                 Key: KYLIN-4605
>                 URL: https://issues.apache.org/jira/browse/KYLIN-4605
>             Project: Kylin
>          Issue Type: Improvement
>          Components: Metrics
>    Affects Versions: v3.0.0-alpha, v3.0.0-beta
>            Reporter: chuxiao
>            Assignee: chuxiao
>            Priority: Major
>
> Writing metrics to the file fails because of a file lease conflict:
> hdfs://xxxx/kday_date=2020-01-14/bigdata-kylin-shuyi3-00.gz01.diditaxi.com-part-0000 due to org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to APPEND_FILE /user/prod_kylin/bigdata_kylin/hive/bigdata_kylin/hive_metrics_query_prod3/kday_date=2020-01-14/bigdata-kylin-shuyi3-00.gz01.diditaxi.com-part-0000 for DFSClient_NONMAPREDUCE_-1428991477_29 on 100.69.76.32 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-312505276_29 on 100.69.76.32
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2700)
> at org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:118)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2735)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:842)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:493)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:886)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:828)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1903)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2717)
> KYLIN-4026 already resolved this problem: at startup a timestamp is added to the file name, and the output stream is kept open and written to for the whole day instead of re-opening the file with append.
> Since two processes on the same machine cannot initialize HiveProducer in the same millisecond, the file names can never collide, so the lease conflict cannot occur.
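The naming scheme described above can be sketched as follows. This is a hypothetical illustration, not the actual KYLIN-4026 patch: the class and method names (`MetricsFileNaming`, `partFileName`) are invented for this example, and only the idea is taken from the comment — suffixing the per-host part-file name with the producer's startup timestamp so that two Kylin processes on the same host never append to the same HDFS file and therefore never contend for its lease.

```java
// Hypothetical sketch of the KYLIN-4026 idea: each HiveProducer instance
// derives its metrics file name from its own startup timestamp, so two
// processes on one host write to distinct files and never fight over an
// HDFS append lease. Names below are illustrative, not Kylin's actual API.
public class MetricsFileNaming {

    // hostName and partNum mirror the "<host>-part-0000" scheme seen in the
    // error message; the millisecond timestamp is the conflict-avoidance part.
    static String partFileName(String hostName, int partNum, long startMillis) {
        return String.format("%s-part-%04d-%d", hostName, partNum, startMillis);
    }

    public static void main(String[] args) {
        // In a real producer this would be System.currentTimeMillis() captured
        // once at startup; fixed values are used here for demonstration.
        long t1 = 1578960000000L;
        long t2 = t1 + 1; // a second process started one millisecond later
        String a = partFileName("kylin-host-00", 0, t1);
        String b = partFileName("kylin-host-00", 0, t2);
        System.out.println(a);
        System.out.println(b);
        // The two names differ, so each process appends to its own file.
    }
}
```

Since HDFS grants the append lease per file, distinct file names per process are sufficient to avoid AlreadyBeingCreatedException, at the cost of one extra partition file per producer restart.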



--
This message was sent by Atlassian Jira
(v8.3.4#803005)