Posted to user@pig.apache.org by Jieru Shi <cr...@gmail.com> on 2012/11/24 04:19:29 UTC

Fwd: Job Jar file does not exist

Hi
I'm using embedded Pig to implement a graph algorithm.
It works fine in local mode, but when I run on the Hadoop cluster, an error
like the following keeps popping up (please see the last few lines):

2012-11-23 22:00:00,651 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job4116346741117365374.jar
2012-11-23 22:00:09,418 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job4116346741117365374.jar created
2012-11-23 22:00:09,423 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up multi store job
2012-11-23 22:00:09,431 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=296
2012-11-23 22:00:09,431 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Neither PARALLEL nor default parallelism is set for this job. Setting number of reducers to 1
2012-11-23 22:00:09,442 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2012-11-23 22:00:09,949 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job null has failed! Stop running all dependent jobs
2012-11-23 22:00:09,949 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2012-11-23 22:00:09,992 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 6015: During execution, encountered a Hadoop error.
2012-11-23 22:00:09,993 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2012-11-23 22:00:09,994 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:

HadoopVersion    PigVersion    UserId    StartedAt              FinishedAt             Features
0.20.1           0.10.0       jierus    2012-11-23 21:52:38    2012-11-23 22:00:09    HASH_JOIN,GROUP_BY,DISTINCT,FILTER,UNION

Some jobs have failed! Stop running all dependent jobs
Failed Jobs:
JobId    Alias                                   Feature                  Message    Outputs
N/A      vec_comp,vec_comp_final,vec_comp_tmp    HASH_JOIN,MULTI_QUERY
Message: java.io.FileNotFoundException: File /tmp/Job4116346741117365374.jar does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:192)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1184)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1160)
    at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1132)

Does anybody know which part of my code or operation is wrong?

Re: Job Jar file does not exist

Posted by Rohini Palaniswamy <ro...@gmail.com>.
Do you have any cron job that cleans up the /tmp directory?
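A quick way to check for such a cleaner on the Pig client machine might look like the sketch below. tmpwatch/tmpreaper are common suspects on Linux distributions of this era, but which cleaner (if any) is installed is an assumption, not something known from this thread.

```shell
# Look for automatic /tmp cleaners: per-user cron entries plus the
# system cron.daily/cron.hourly directories. The tmp(watch|reaper|clean)
# pattern covers the usual suspects; adjust for your distro.
found=$( { crontab -l 2>/dev/null; ls /etc/cron.daily /etc/cron.hourly 2>/dev/null; } \
         | grep -ciE 'tmp(watch|reaper|clean)' || true )
echo "possible tmp cleaners found: $found"
```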


Re: Job Jar file does not exist

Posted by Jieru Shi <cr...@gmail.com>.
Hi Jagat Singh,
I have permission to write files there.
My script consists of loops, which means several jobs will be created.
It is weird that the error happens irregularly: sometimes the first job
fails, sometimes a job fails only after several successful ones.
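Since the missing file is the client-side staging jar under /tmp and the failures are irregular, one hypothetical workaround is to keep Pig's staging files out of /tmp entirely, so nothing else can delete them mid-run. PIG_OPTS is read by the pig launcher script; the directory name below, and whether this actually resolves the failure seen here, are assumptions.

```shell
# Workaround sketch: point the Pig client JVM's java.io.tmpdir at a
# private directory so an external /tmp cleaner cannot remove the
# Job*.jar between creation and submission.
PIG_TMP="$HOME/.pig-tmp"                  # arbitrary writable location
mkdir -p "$PIG_TMP"
export PIG_OPTS="$PIG_OPTS -Djava.io.tmpdir=$PIG_TMP"
echo "PIG_OPTS=$PIG_OPTS"
# pig myscript.pig   # run as usual; job jars should now land under $PIG_TMP
```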


Re: Job Jar file does not exist

Posted by Jagat Singh <ja...@gmail.com>.
The first check I would do is the permissions on this temp folder.
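That check might look like the following sketch on the Pig client machine. /tmp is assumed here because that is where the jar path in the log points; the probe filename is arbitrary.

```shell
# Sanity-check /tmp permissions for the current user.
ls -ld /tmp                                  # expect drwxrwxrwt: world-writable, sticky bit set
probe="/tmp/pig-perm-probe.$$"               # throwaway file name (hypothetical)
touch "$probe" && echo "write OK: $probe"    # confirm this user can actually create files
rm -f "$probe"
```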
