Posted to user@pig.apache.org by mingda li <li...@gmail.com> on 2016/12/07 05:01:09 UTC
File could only be replicated to 0 nodes, instead of 1
Hi,
I am running a multi-way join over 100 GB of TPC-DS data, with a bad join order, on our
cluster. Each time, the job fails with the exception below.
Has anyone seen this before? Could it be caused by the data exceeding the available disk space?
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/tmp/temp-1180529634/tmp-491747926/_temporary/_attempt_201607142217_0115_r_000000_0/part-r-00000
could only be replicated to 0 nodes, instead of 1
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
	at sun.reflect.GeneratedMethodAccessor851.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
	at java.security.AccessController.doPrivileged(Native Method)
	...
Pig Stack Trace
---------------
ERROR 1066: Unable to open iterator for alias limit_data

org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias limit_data
	at org.apache.pig.PigServer.openIterator(PigServer.java:935)
	at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
	at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
	...
More detailed information is in the attachment.
FW: File could only be replicated to 0 nodes, instead of 1
Posted by "Zhang, Liyun" <li...@intel.com>.
Hi:
You can google “File could only be replicated to 0 nodes, instead of 1”; there are several possible causes. In most cases it is a lack of free disk space, or all of the DataNodes being down.
Best Regards
Kelly Zhang/Zhang,Liyun
From: mingda li [mailto:limingda1993@gmail.com]
Sent: Wednesday, December 7, 2016 1:01 PM
To: dev@pig.apache.org; user@pig.apache.org
Subject: File could only be replicated to 0 nodes, instead of 1
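A concrete way to check the diagnosis in the reply (dead DataNodes or exhausted HDFS capacity) is to query the NameNode directly. This is a minimal sketch, assuming a standard Hadoop client is on the PATH of a cluster node; the commands themselves are stock HDFS CLI, not anything specific to this thread:

```shell
# Report overall HDFS capacity and the state of every DataNode.
# "Live datanodes (0)" or "DFS Remaining: 0" in the output would
# match the "replicated to 0 nodes" failure described above.
hdfs dfsadmin -report

# Show total vs. used HDFS space in human-readable form.
hdfs dfs -df -h /

# Pig writes its intermediate join output under /tmp (as the failing
# path in the exception shows), so check how much space it consumes.
hdfs dfs -du -h /tmp
```

If `-report` shows healthy DataNodes with free space, the local disks backing `dfs.datanode.data.dir` may still be full or the DataNodes unreachable from the client; the DataNode logs would be the next place to look.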