Posted to common-user@hadoop.apache.org by john li <li...@gmail.com> on 2010/01/26 02:25:21 UTC
hadoop jobs stop at state 4 (prepare)!
Three jobs in my Hadoop cluster have been stuck at state 4 for five hours!

My Hadoop has become slower and slower since I set it up. I think maybe it is because the filesystem keeps growing larger? Here is the bin/hadoop fs -count result (directory count, file count, content size in bytes, path):
401141 824522 659714994647 hdfs://hadoop01:9000/user/hadoop

Things became much worse three days ago, when I started running five more jobs simultaneously. A job that used to take two minutes now takes more than eight, and I noticed that jobs submitted to Hadoop sit at state 4 in the job queue for a long time, a minute or more. Right now, three jobs have been at state 4 for five hours, because the last job to complete finished five hours ago!
Here is the bin/hadoop job -list output:
3 jobs currently running
JobId                  State  StartTime      UserName  Priority  SchedulingInfo
job_201001221137_7050  4      1264453449856  hadoop    NORMAL
job_201001221137_7051  4      1264453919364  hadoop    NORMAL
job_201001221137_7052  4      1264453927418  hadoop    NORMAL
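For anyone else hitting this thread: the numeric State column comes from the old org.apache.hadoop.mapred.JobStatus run-state constants. As far as I can tell from the 0.20-era source (double-check against your own Hadoop version), state 4 is PREP, meaning the job has been submitted but its tasks have not been initialized yet. A small sketch of that mapping:

```java
import java.util.Map;

public class JobStates {
    // Numeric run-state codes as defined in the 0.20-era
    // org.apache.hadoop.mapred.JobStatus class (assumed mapping;
    // verify against your Hadoop version's source).
    static final Map<Integer, String> STATES = Map.of(
            1, "RUNNING",
            2, "SUCCEEDED",
            3, "FAILED",
            4, "PREP",    // submitted, tasks not yet initialized
            5, "KILLED");

    public static void main(String[] args) {
        // State shown by "bin/hadoop job -list" for the stuck jobs
        System.out.println(STATES.get(4));
    }
}
```

So jobs "stuck at state 4" have never left task initialization, which points at the JobTracker rather than at the tasks themselves.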
Can anybody help me?
--
Regards
Junyong
Re: hadoop jobs stop at state 4 (prepare)!
Posted by john li <li...@gmail.com>.
I found the reason in the jobtracker.log:
2010-01-26 05:06:25,624 ERROR mapred.EagerTaskInitializationListener - Job initialization failed:
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.regex.Pattern.compile(Pattern.java:1451)
    at java.util.regex.Pattern.<init>(Pattern.java:1133)
    at java.util.regex.Pattern.compile(Pattern.java:847)
    at java.lang.String.replace(String.java:2207)
    at org.apache.hadoop.fs.Path.normalizePath(Path.java:147)
    at org.apache.hadoop.fs.Path.initialize(Path.java:137)
    at org.apache.hadoop.fs.Path.<init>(Path.java:126)
    at org.apache.hadoop.fs.Path.<init>(Path.java:50)
    at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:297)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:723)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:748)
    at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:457)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:723)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:748)
    at org.apache.hadoop.mapred.JobHistory$JobInfo.getJobHistoryFileName(JobHistory.java:662)
    at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:805)
    at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:360)
    at org.apache.hadoop.mapred.EagerTaskInitializationListener$JobInitThread.run(EagerTaskInitializationListener.java:55)
    at java.lang.Thread.run(Thread.java:619)
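Since the JobTracker is running out of heap ("GC overhead limit exceeded"), the usual first step is to give the daemon a larger heap and restart it. A minimal sketch, assuming the stock conf/hadoop-env.sh layout; the 2000 MB figure is only an example to size against your master node's RAM, not a recommendation:

```shell
# conf/hadoop-env.sh on the JobTracker node -- restart the daemon after editing.
# Heap size in MB for the Hadoop daemons (the shipped default is 1000).
export HADOOP_HEAPSIZE=2000
```

It may also be worth checking how many completed jobs the JobTracker is retaining in memory (mapred.jobtracker.completeuserjobs.maximum in mapred-site.xml); with many jobs per day, lowering it reduces JobTracker heap pressure.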
--
Regards
Junyong