Posted to issues-all@impala.apache.org by "Andrew Sherman (Jira)" <ji...@apache.org> on 2021/11/11 18:07:00 UTC

[jira] [Assigned] (IMPALA-11016) load_nested fails with Hive exception in BlockManager.chooseTarget4NewBlock running 'CREATE EXTERNAL TABLE region ...'

     [ https://issues.apache.org/jira/browse/IMPALA-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Sherman reassigned IMPALA-11016:
---------------------------------------

    Assignee: Laszlo Gaal

> load_nested fails with Hive exception in BlockManager.chooseTarget4NewBlock running 'CREATE EXTERNAL TABLE region ...' 
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: IMPALA-11016
>                 URL: https://issues.apache.org/jira/browse/IMPALA-11016
>             Project: IMPALA
>          Issue Type: Bug
>            Reporter: Andrew Sherman
>            Assignee: Laszlo Gaal
>            Priority: Critical
>
> The failure is:
> {code}
> 2021-11-10 13:42:55,781 INFO:load_nested[348]:Executing: 
>       CREATE EXTERNAL TABLE region
>       STORED AS parquet
>       TBLPROPERTIES('parquet.compression' = 'SNAPPY','external.table.purge'='TRUE')
>       AS SELECT * FROM tmp_region
> Traceback (most recent call last):
>   File "/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/testdata/bin/load_nested.py", line 415, in <module>
>     load()
>   File "/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/testdata/bin/load_nested.py", line 349, in load
>     hive.execute(stmt)
>   File "/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/tests/comparison/db_connection.py", line 206, in execute
>     return self._cursor.execute(sql, *args, **kwargs)
>   File "/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/infra/python/env-gcc7.5.0/lib/python2.7/site-packages/impala/hiveserver2.py", line 331, in execute
>     self._wait_to_finish()  # make execute synchronous
>   File "/data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/infra/python/env-gcc7.5.0/lib/python2.7/site-packages/impala/hiveserver2.py", line 412, in _wait_to_finish
>     raise OperationalError(resp.errorMessage)
> impala.error.OperationalError: Error while compiling statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
> ERROR in /data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/testdata/bin/create-load-data.sh at line 48:
> Generated: /data/jenkins/workspace/impala-cdh-7.1-maint-exhaustive-release/repos/Impala/logs/extra_junit_xml_logs/generate_junitxml.buildall.create-load-data.20211110_21_43_04.xml
> + echo 'buildall.sh ' -release -format '-testdata failed.'
> buildall.sh  -release -format -testdata failed.
> {code}
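> The statement is issued by testdata/bin/load_nested.py through impyla against HiveServer2; per the traceback, execute() calls _wait_to_finish() to make the operation synchronous, which is where the OperationalError surfaces. To drive the same CTAS in isolation, a minimal sketch along these lines should work (host, port, and auth_mechanism are illustrative placeholders, not the minicluster's actual config):
> {code}
> from impala.dbapi import connect
>
> # Connect to HiveServer2 via impyla, roughly as db_connection.py's
> # HiveConnection does; host/port/auth below are placeholders.
> conn = connect(host='localhost', port=10000, auth_mechanism='NOSASL')
> cur = conn.cursor()
>
> # execute() blocks until the operation finishes, so a Tez failure on
> # the server side is raised here as an OperationalError.
> cur.execute("""
>     CREATE EXTERNAL TABLE region
>     STORED AS parquet
>     TBLPROPERTIES('parquet.compression' = 'SNAPPY','external.table.purge'='TRUE')
>     AS SELECT * FROM tmp_region""")
> {code}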
> hive-server2.log shows:
> {code}
> 2021-11-10T13:43:03,381 ERROR [HiveServer2-Background-Pool: Thread-18405] tez.TezTask: Failed to execute tez graph.
> org.apache.tez.dag.api.TezUncheckedException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/jenkins/_tez_session_dir/b191f8aa-6c28-447c-b246-1d7e38c0b3e0/.tez/application_1636579095895_0038/recovery/1/summary could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
> 	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2280)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2827)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:874)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:589)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)
> 	at org.apache.tez.dag.app.DAGAppMaster.startDAG(DAGAppMaster.java:2545)
> 	at org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1364)
> 	at org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:145)
> 	at org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:184)
> 	at org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7648)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/jenkins/_tez_session_dir/b191f8aa-6c28-447c-b246-1d7e38c0b3e0/.tez/application_1636579095895_0038/recovery/1/summary could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
> 	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2280)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2827)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:874)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:589)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)
> 	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1508)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> 	at com.sun.proxy.$Proxy12.addBlock(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:523)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
> 	at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1116)
> 	at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1880)
> 	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1682)
> 	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:719)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_144]
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_144]
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_144]
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_144]
> 	at org.apache.tez.common.RPCUtil.instantiateException(RPCUtil.java:53) ~[tez-api-0.9.1.7.1.8.0-370.jar:0.9.1.7.1.8.0-370]
> 	at org.apache.tez.common.RPCUtil.instantiateRuntimeException(RPCUtil.java:85) ~[tez-api-0.9.1.7.1.8.0-370.jar:0.9.1.7.1.8.0-370]
> 	at org.apache.tez.common.RPCUtil.unwrapAndThrowException(RPCUtil.java:135) ~[tez-api-0.9.1.7.1.8.0-370.jar:0.9.1.7.1.8.0-370]
> 	at org.apache.tez.client.FrameworkClient.submitDag(FrameworkClient.java:148) ~[tez-api-0.9.1.7.1.8.0-370.jar:0.9.1.7.1.8.0-370]
> 	at org.apache.tez.client.TezClient.submitDAGSession(TezClient.java:687) ~[tez-api-0.9.1.7.1.8.0-370.jar:0.9.1.7.1.8.0-370]
> 	at org.apache.tez.client.TezClient.submitDAG(TezClient.java:593) ~[tez-api-0.9.1.7.1.8.0-370.jar:0.9.1.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.exec.tez.TezTask.submit(TezTask.java:603) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:237) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:357) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:330) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:246) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:109) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:742) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:497) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:491) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:166) ~[hive-exec-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:226) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:88) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:327) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_144]
> 	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_144]
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) ~[hadoop-common-3.1.1.7.1.8.0-370.jar:?]
> 	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:345) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_144]
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_144]
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_144]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]
> 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
> Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/jenkins/_tez_session_dir/b191f8aa-6c28-447c-b246-1d7e38c0b3e0/.tez/application_1636579095895_0038/recovery/1/summary could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
> 	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2280)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2827)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:874)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:589)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)
> 	at org.apache.tez.dag.app.DAGAppMaster.startDAG(DAGAppMaster.java:2545)
> 	at org.apache.tez.dag.app.DAGAppMaster.submitDAGToAppMaster(DAGAppMaster.java:1364)
> 	at org.apache.tez.dag.api.client.DAGClientHandler.submitDAG(DAGClientHandler.java:145)
> 	at org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolBlockingPBServerImpl.submitDAG(DAGClientAMProtocolBlockingPBServerImpl.java:184)
> 	at org.apache.tez.dag.api.client.rpc.DAGClientAMProtocolRPC$DAGClientAMProtocol$2.callBlockingMethod(DAGClientAMProtocolRPC.java:7648)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive/jenkins/_tez_session_dir/b191f8aa-6c28-447c-b246-1d7e38c0b3e0/.tez/application_1636579095895_0038/recovery/1/summary could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
> 	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2280)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2827)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:874)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:589)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:533)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:989)
> 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:917)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2894)
> 	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1508)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> 	at com.sun.proxy.$Proxy12.addBlock(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:523)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:431)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
> 	at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1116)
> 	at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1880)
> 	at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1682)
> 	at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:719)
> 	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562) ~[hadoop-common-3.1.1.7.1.8.0-370.jar:?]
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1508) ~[hadoop-common-3.1.1.7.1.8.0-370.jar:?]
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1405) ~[hadoop-common-3.1.1.7.1.8.0-370.jar:?]
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233) ~[hadoop-common-3.1.1.7.1.8.0-370.jar:?]
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) ~[hadoop-common-3.1.1.7.1.8.0-370.jar:?]
> 	at com.sun.proxy.$Proxy79.submitDAG(Unknown Source) ~[?:?]
> 	at org.apache.tez.client.FrameworkClient.submitDag(FrameworkClient.java:141) ~[tez-api-0.9.1.7.1.8.0-370.jar:0.9.1.7.1.8.0-370]
> 	... 28 more
> 2021-11-10T13:43:03,383  INFO [HiveServer2-Background-Pool: Thread-18405] reexec.ReOptimizePlugin: ReOptimization: retryPossible: false
> 2021-11-10T13:43:03,383 ERROR [HiveServer2-Background-Pool: Thread-18405] ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
> 2021-11-10T13:43:03,383  INFO [HiveServer2-Background-Pool: Thread-18405] ql.Driver: Completed executing command(queryId=jenkins_20211110134255_3406587a-733d-4963-b6c6-300c1a3408ff); Time taken: 7.456 seconds
> 2021-11-10T13:43:03,383  INFO [HiveServer2-Background-Pool: Thread-18405] ql.Driver: OK
> 2021-11-10T13:43:03,383  INFO [HiveServer2-Background-Pool: Thread-18405] lockmgr.DbTxnManager: Stopped heartbeat for query: jenkins_20211110134255_3406587a-733d-4963-b6c6-300c1a3408ff
> 2021-11-10T13:43:03,494  INFO [HiveServer2-Background-Pool: Thread-18405] common.LogUtils: Unregistered logging context.
> 2021-11-10T13:43:03,494 ERROR [HiveServer2-Background-Pool: Thread-18405] operation.Operation: Error running hive query: 
> org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
> 	at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:362) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:88) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:327) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_144]
> 	at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_144]
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1898) ~[hadoop-common-3.1.1.7.1.8.0-370.jar:?]
> 	at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:345) ~[hive-service-3.1.3000.7.1.8.0-370.jar:3.1.3000.7.1.8.0-370]
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_144]
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_144]
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_144]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]
> 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
> 2021-11-10T13:43:03,495  INFO [2043339c-21df-43b8-81e3-99bd5a3363d0 HiveServer2-Handler-Pool: Thread-4929] operation.OperationManager: Closing operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=dd0b9651-0493-4b48-ba29-f3465d0cd0e2]
> {code}
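> The "could only be written to 0 of the 1 minReplication nodes ... 3 node(s) are excluded" pattern means the HDFS client excluded all three datanodes while setting up the write pipeline, even though all of them were reported as running; commonly that points at full datanode disks or prior pipeline failures rather than dead nodes. A quick way to check datanode state on the minicluster is to run the standard HDFS CLI reports, e.g. driven from Python (the fsck path is illustrative):
> {code}
> import subprocess
>
> # 'hdfs dfsadmin -report' lists per-datanode capacity and remaining
> # space; 'hdfs fsck <path> -files -blocks' shows block placement under
> # the Tez session dir. Both are stock HDFS commands.
> for cmd in (['hdfs', 'dfsadmin', '-report'],
>             ['hdfs', 'fsck', '/tmp/hive', '-files', '-blocks']):
>     print('$ ' + ' '.join(cmd))
>     print(subprocess.check_output(cmd).decode())
> {code}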


