Posted to common-user@hadoop.apache.org by Sridhar Raman <sr...@gmail.com> on 2008/04/29 15:39:24 UTC

Job.jar could only be replicated to 0 nodes, instead of 1 (IO Exception)

I am trying to run K-Means using Hadoop.  I first wanted to test it within a
single-node cluster.  And this was the error I got.  What could be the
problem?

$ bin/hadoop jar clustering.jar
com.company.analytics.clustering.mr.core.KMeansDriver
Iteration 0
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /WORK/temp/hadoop/workspace/hadoop-user/mapred/system/job_200804291904_0001/job.jar could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1003)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:293)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:379)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:596)
        at org.apache.hadoop.ipc.Client.call(Client.java:482)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:184)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:1554)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1500)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1626)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1733)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:49)
        at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:64)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:55)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:83)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:140)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:827)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:815)
        at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:796)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:493)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:753)
        at com.company.analytics.clustering.mr.core.KMeansDriver.runIteration(KMeansDriver.java:136)
        at com.company.analytics.clustering.mr.core.KMeansDriver.runJob(KMeansDriver.java:88)
        at com.company.analytics.clustering.mr.core.KMeansDriver.main(KMeansDriver.java:34)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:155)

Re: Job.jar could only be replicated to 0 nodes, instead of 1 (IO Exception)

Posted by Sridhar Raman <sr...@gmail.com>.
My datanode is up.  The problem went away once I re-formatted the DFS, so it
could have been a corruption issue.
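
For anyone who hits the same thing on a single-node test setup, a rough sketch
of the re-format sequence follows. Note that it wipes everything stored in
HDFS, and the /tmp path below is only illustrative: it assumes the default
hadoop.tmp.dir, so adjust it if dfs.name.dir or dfs.data.dir point elsewhere.

$ bin/stop-all.sh                 # stop NameNode, DataNode, JobTracker and TaskTracker
$ rm -rf /tmp/hadoop-${USER}/dfs  # clear the old name/data directories (default hadoop.tmp.dir layout)
$ bin/hadoop namenode -format     # create a fresh, empty namespace
$ bin/start-all.sh                # bring the daemons back up
$ bin/hadoop dfs -put <local-input> <hdfs-input-dir>   # re-load input data before re-running the job

Clearing the data directory matters because a datanode formatted against the
old namespace will otherwise refuse to join the new one, typically logging an
"Incompatible namespaceIDs" error.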

On Tue, Apr 29, 2008 at 7:51 PM, Amar Kamat <am...@yahoo-inc.com> wrote:

> Sridhar Raman wrote:
>
> > I am trying to run K-Means using Hadoop.  I first wanted to test it
> > within a
> > single-node cluster.  And this was the error I got.  What could be the
> > problem?
> >
> > $ bin/hadoop jar clustering.jar
> > com.company.analytics.clustering.mr.core.KMeansDriver
> > Iteration 0
> > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> > /WORK/temp/hadoop/workspace/hadoop-user/mapred/system/job_200804291904_0001/job.jar
> > could only be replicated to 0 nodes, instead of 1
> >
> Check if your datanode is up or not.
> Amar
>

Re: Job.jar could only be replicated to 0 nodes, instead of 1 (IO Exception)

Posted by Amar Kamat <am...@yahoo-inc.com>.
Sridhar Raman wrote:
> I am trying to run K-Means using Hadoop.  I first wanted to test it within a
> single-node cluster.  And this was the error I got.  What could be the
> problem?
>
> $ bin/hadoop jar clustering.jar
> com.company.analytics.clustering.mr.core.KMeansDriver
> Iteration 0
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /WORK/temp/hadoop/workspace/hadoop-user/mapred/system/job_200804291904_0001/job.jar
> could only be replicated to 0 nodes, instead of 1
>   
Check if your datanode is up or not.
Amar
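
For reference, two quick ways to verify this on a single-node cluster
(assuming the daemons were launched with bin/start-all.sh from the Hadoop
install directory):

$ jps                          # the DataNode process should be listed alongside NameNode, JobTracker and TaskTracker
$ bin/hadoop dfsadmin -report  # reports the number of live datanodes; it should be at least 1

If the DataNode is missing, its log under logs/hadoop-<user>-datanode-<hostname>.log
usually explains why it failed to start.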