Posted to common-user@hadoop.apache.org by Jianwu Wang <ji...@sdsc.edu> on 2011/02/22 23:21:51 UTC
DFSClient Exception for closing file
Hi there,

I'm running Hadoop 0.21.0 in fully-distributed mode on a cluster. My jobs are map-only, and each map just invokes a local executable, so the maps write no output to HDFS. All of these jobs work fine except one, which fails with the exception below. One possible reason I can think of is that this job runs longer than the others. Does anyone know why this happens? Is there any HDFS/MapReduce configuration item I should set to avoid this error?

Thank you very much!
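Since the maps produce no HDFS output anyway, one workaround I am considering (an untested sketch, not my actual code; the driver and mapper class names below are hypothetical placeholders) is to use Hadoop's NullOutputFormat so the job never opens a part-m-* file on HDFS in the first place:

```java
// Sketch of a map-only driver using the 0.21-era "new" MapReduce API.
// NullOutputFormat discards all output, so no part-m-* file is created
// (and therefore none has to be closed by the DFSClient).
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ExternalExecDriver {

    // Hypothetical mapper: runs a local executable and emits nothing.
    public static class ExternalExecMapper
            extends Mapper<LongWritable, Text, NullWritable, NullWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Placeholder for the real local command.
            Process p = Runtime.getRuntime().exec(new String[] {"/bin/true"});
            p.waitFor();
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "external-exec");       // 0.21-era constructor
        job.setJarByClass(ExternalExecDriver.class);
        job.setMapperClass(ExternalExecMapper.class);
        job.setNumReduceTasks(0);                       // map-only, as in the failing job
        FileInputFormat.addInputPath(job, new Path(args[0]));
        // No HDFS output is wanted, so skip FileOutputFormat entirely:
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

I have not verified that this avoids the exception, but it would at least remove the file-close step that is failing.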
attempt_201102221033_0025_m_000000_0/syslog
::::::::::::::
2011-02-22 11:43:56,123 INFO org.apache.hadoop.security.Groups: Group
mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
cacheTimeout=300000
2011-02-22 11:43:57,134 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=MAP, sessionId=
2011-02-22 11:43:57,144 WARN org.apache.hadoop.conf.Configuration:
user.name is deprecated. Instead, use mapreduce.job.user.name
2011-02-22 11:43:57,704 WARN org.apache.hadoop.conf.Configuration:
mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
2011-02-22 12:12:16,822 ERROR org.apache.hadoop.hdfs.DFSClient: Exception closing file /user/jianwu/591924608061872993/_temporary/_attempt_201102221033_0025_m_000000_0/part-m-00000 : java.io.IOException: Could not complete write to file /user/jianwu/591924608061872993/_temporary/_attempt_201102221033_0025_m_000000_0/part-m-00000 by DFSClient_attempt_201102221033_0025_m_000000_0
    at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:720)
    at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)
java.io.IOException: Could not complete write to file /user/jianwu/591924608061872993/_temporary/_attempt_201102221033_0025_m_000000_0/part-m-00000 by DFSClient_attempt_201102221033_0025_m_000000_0
    at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:720)
    at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:342)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1350)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)
    at org.apache.hadoop.ipc.Client.call(Client.java:905)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
    at $Proxy1.complete(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.complete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:1406)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:1393)
    at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1333)
    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:308)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:404)
    at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1830)
    at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:1846)
--
Best wishes,
Sincerely yours,
Jianwu Wang
jianwu@sdsc.edu