Posted to dev@hive.apache.org by "JackYan (Jira)" <ji...@apache.org> on 2021/09/27 09:07:00 UTC

[jira] [Created] (HIVE-25562) IOException on OrcRecordWriter.close when using Erasure Coding Policy XOR-2-1-1024k

JackYan created HIVE-25562:
------------------------------

             Summary: IOException on OrcRecordWriter.close when using Erasure Coding Policy XOR-2-1-1024k
                 Key: HIVE-25562
                 URL: https://issues.apache.org/jira/browse/HIVE-25562
             Project: Hive
          Issue Type: Bug
    Affects Versions: 3.1.2
         Environment: hadoop version: 3.1.3
hive version: 3.1.2
hdfs: 4 datanode(s)
output dir ec policy: XOR-2-1-1024k
            Reporter: JackYan


I have a Hadoop cluster with 4 datanodes. I created a table `test`.`relation_message_orc` stored as ORC and set the EC policy XOR-2-1-1024k on the table's directory. Then I inserted some data into the table with a HiveQL statement like this:
  INSERT OVERWRITE TABLE test.relation_message_orc select * from test.relation_message_text;
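For reference, my setup corresponds roughly to the commands below. The actual CREATE TABLE DDL is not included here, so this is a hypothetical sketch that just copies the source table's schema; the target path is taken from the stack trace further down.
  -- Hypothetical reconstruction: create an empty ORC copy of the source table's schema.
  CREATE TABLE test.relation_message_orc STORED AS ORC
    AS SELECT * FROM test.relation_message_text WHERE 1 = 0;

  # XOR-2-1-1024k is not enabled by default, so it has to be enabled first,
  # then applied to the table directory before any data is written.
  hdfs ec -enablePolicy -policy XOR-2-1-1024k
  hdfs ec -setPolicy -path /user/hive/data/test/relation_message_orc -policy XOR-2-1-1024k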
Unfortunately, running the INSERT gives an error like this:
  Error: java.lang.RuntimeException: Hive Runtime Error while closing operators
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:247)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapRunner.run(ExecMapRunner.java:37)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/data/test/relation_message_orc/.hive-staging_hive_2021-09-27_16-35-21_242_8445805845312286067-1/_task_tmp.-ext-10002/_tmp.000000_3 could only be written to 1 of the 2 required nodes for XOR-2-1-1024k. There are 4 datanode(s) running and 4 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2212)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)

        at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:217)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:1291)
        at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:733)
        at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757)
        at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757)
        at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:757)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:229)
        ... 9 more
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hive/data/test/relation_message_orc/.hive-staging_hive_2021-09-27_16-35-21_242_8445805845312286067-1/_task_tmp.-ext-10002/_tmp.000000_3 could only be written to 1 of the 2 required nodes for XOR-2-1-1024k. There are 4 datanode(s) running and 4 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2212)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2731)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:568)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916)

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1545)
        at org.apache.hadoop.ipc.Client.call(Client.java:1491)
        at org.apache.hadoop.ipc.Client.call(Client.java:1388)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
        at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:514)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1081)
        at org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:480)
        at org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:526)
        at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
        at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
        at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
        at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:136)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.orc.impl.PhysicalFsWriter$BufferedStream.spillToDiskAndClear(PhysicalFsWriter.java:322)
        at org.apache.orc.impl.PhysicalFsWriter.finalizeStripe(PhysicalFsWriter.java:373)
        at org.apache.orc.impl.WriterImpl.flushStripe(WriterImpl.java:466)
        at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:580)
        at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:335)
        at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.close(OrcOutputFormat.java:114)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:214)
        ... 15 more

I have no idea how to resolve this; maybe it's a bug?
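If it helps with triage, here is the standard HDFS state I can check around the failure; these are generic diagnostics, not a confirmed fix:
  # Confirm the EC policy actually applied to the table directory
  hdfs ec -getPolicy -path /user/hive/data/test/relation_message_orc
  # List all EC policies and whether each is enabled on this cluster
  hdfs ec -listPolicies
  # Show live/dead datanode counts and per-node capacity
  hdfs dfsadmin -report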


