Posted to dev@oozie.apache.org by "Attila Sasvari (JIRA)" <ji...@apache.org> on 2017/02/07 10:11:41 UTC

[jira] [Created] (OOZIE-2791) ShareLib installation may fail on busy Hadoop clusters

Attila Sasvari created OOZIE-2791:
-------------------------------------

             Summary: ShareLib installation may fail on busy Hadoop clusters
                 Key: OOZIE-2791
                 URL: https://issues.apache.org/jira/browse/OOZIE-2791
             Project: Oozie
          Issue Type: Bug
            Reporter: Attila Sasvari


On a busy Hadoop cluster, users may not be able to install the Oozie ShareLib properly.

Example: on a Hadoop 2.4.0 pseudo-distributed cluster, running the ShareLib installation with the concurrency set high (to simulate a busy cluster):
{code}
oozie-setup.sh sharelib create -fs hdfs://localhost:9000 -locallib oozie-sharelib-*.tar.gz -concurrency 150
{code}

The output shows a large number of errors (failed copy tasks):
{code}
Running 464 copy tasks on 150 threads

Error: Copy task failed with exception

Stack trace for the error was (for debug purposes):
--------------------------------------
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/asasvari/share/lib/lib_20170207105926/distcp/hadoop-distcp-2.4.0.jar could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and no node(s) are excluded in this operation.
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1430)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2684)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:584)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:440)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

	at org.apache.hadoop.ipc.Client.call(Client.java:1410)
	at org.apache.hadoop.ipc.Client.call(Client.java:1363)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
	at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
	at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:361)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1439)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1261)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:525)
--------------------------------------
...
{code}

You can see the file is created, but its size is 0:
{code}
-rw-r--r--   3 asasvari supergroup          0 2017-02-07 10:59 share/lib/lib_20170207105926/distcp/hadoop-distcp-2.4.0.jar
{code}

This behaviour is clearly wrong. 

In case of such an exception, we should retry the copy or roll back the changes. We should also consider throttling HDFS requests.
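
A minimal sketch of one possible direction (not the actual Oozie code): bound the number of in-flight HDFS writes with a semaphore, retry each copy with backoff, and delete the empty target file if all retries fail. {{ThrottledCopyTask}}, {{CopyJob}}, and the constants are hypothetical names for illustration only.
{code}
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ThrottledCopyTask implements Runnable {
    // Hypothetical limits: cap concurrent HDFS writes so a high
    // -concurrency setting cannot overwhelm the NameNode/DataNodes.
    private static final Semaphore IN_FLIGHT = new Semaphore(10);
    private static final int MAX_RETRIES = 3;
    private static final long BASE_BACKOFF_MS = 1000L;

    private final CopyJob job; // hypothetical holder of the src/dst paths

    ThrottledCopyTask(CopyJob job) {
        this.job = job;
    }

    @Override
    public void run() {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            try {
                IN_FLIGHT.acquire();          // throttle concurrent writes
                try {
                    job.copy();               // the actual HDFS copy
                    return;                   // success
                } finally {
                    IN_FLIGHT.release();
                }
            } catch (Exception e) {
                if (attempt == MAX_RETRIES) {
                    job.rollback();           // e.g. remove the 0-byte target
                    throw new RuntimeException("Copy failed after retries", e);
                }
                sleepQuietly(BASE_BACKOFF_MS * attempt); // linear backoff
            }
        }
    }

    private static void sleepQuietly(long millis) {
        try {
            TimeUnit.MILLISECONDS.sleep(millis);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }

    // Placeholder so the sketch is self-contained; in Oozie this would
    // wrap the FileSystem calls made by the sharelib installer.
    interface CopyJob {
        void copy() throws Exception;
        void rollback();
    }
}
{code}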



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)