Posted to mapreduce-user@hadoop.apache.org by Harsh J <ha...@cloudera.com> on 2012/01/29 09:44:46 UTC

Re: Cannot open filename - in mapreduce program

Hi Vamshi,

This question does not look specific to HBase. Moving to
mapreduce-user@hadoop.apache.org. Please use the appropriate lists for
your questions. (Bcc'd user@hbase.apache.org, cc'd OP).

For your problem, I believe your mistake lies in assuming that the
output path of your previous job is a single file. File-based
MapReduce job outputs are written into a directory, so
/user/hduser/IP/hs4 and /user/hduser/IP/hs3 are probably directories
themselves, each containing the actual output files.

You can confirm this with a simple "fs -ls" on these paths, which will
list the actual files underneath (typically part-* files and a
_SUCCESS marker).
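To make the layout concrete, here is a local simulation of what the
HDFS paths probably look like (the directory and file names are
illustrative; the hadoop commands in the comments assume the classic
"hadoop fs" CLI):

```shell
# Recreate the likely HDFS layout locally: each job output path is a
# directory holding part files, not a single file.
mkdir -p IP/hs3 IP/hs4
touch IP/hs3/_SUCCESS IP/hs3/part-r-00000
touch IP/hs4/_SUCCESS IP/hs4/part-r-00000

# Locally this plays the role of: hadoop fs -ls /user/hduser/IP/hs3
ls IP/hs3

# Instead of feeding the parent directory /user/hduser/IP to the next
# job (which makes it try to open the subdirectories hs3 and hs4 as
# files), a glob over the part files selects only real input files:
echo IP/*/part-*
# prints: IP/hs3/part-r-00000 IP/hs4/part-r-00000
```

On the cluster, the equivalent would be passing a glob such as
/user/hduser/IP/*/part-* as the input path of the follow-up job.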

On Sun, Jan 29, 2012 at 12:41 PM, Vamshi Krishna <va...@gmail.com> wrote:
> Hi, I am trying to run 2 small MapReduce programs, for which I gave
> /user/hduser/IP/hs3 and /user/hduser/IP/hs4 as their respective
> output paths. After they finished running, I ran one more small
> MapReduce program with /user/hduser/IP as its input path, i.e. IP is
> a folder containing hs3 and hs4, written by the 2 previously run
> MapReduce examples. Here I am getting the following error.
>
> Here I have a doubt: can't we give the output of one MapReduce
> program to another directly? I think that is what I am doing here,
> right?
> Can anybody please help.
>
> 12/01/29 12:19:21 INFO input.FileInputFormat: Total input paths to process : 2
> 12/01/29 12:19:21 INFO mapred.JobClient: Running job: job_201201291059_0011
> 12/01/29 12:19:22 INFO mapred.JobClient:  map 0% reduce 0%
> 12/01/29 12:19:34 INFO mapred.JobClient: Task Id : attempt_201201291059_0011_m_000000_0, Status : FAILED
> java.io.IOException: Cannot open filename /user/hduser/IP/hs3
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1497)
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1488)
> at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:376)
> at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:178)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
> at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:67)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:418)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:620)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> 12/01/29 12:19:34 INFO mapred.JobClient: Task Id : attempt_201201291059_0011_m_000001_0, Status : FAILED
> java.io.IOException: Cannot open filename /user/hduser/IP/hs4
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1497)
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1488)
> at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:376)
> at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:178)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
> at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:67)
> at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:418)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:620)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
>
>
>
> --
> Regards,
> Vamshi Krishna



-- 
Harsh J
Customer Ops. Engineer, Cloudera