Posted to common-user@hadoop.apache.org by Raymond Jennings III <ra...@yahoo.com> on 2009/11/26 15:26:17 UTC

Good idea to run NameNode and JobTracker on same machine?

Do people normally combine these two processes onto one machine?  Currently I have them on separate machines, but I am wondering whether they use that much CPU processing time; maybe I should combine them and create another DataNode.


Re: Good idea to run NameNode and JobTracker on same machine?

Posted by Jeff Zhang <zj...@gmail.com>.
It depends on the size of your cluster. I think you can combine them
if your cluster has fewer than 10 machines.


Jeff Zhang




On Thu, Nov 26, 2009 at 6:26 AM, Raymond Jennings III <raymondjiii@yahoo.com
> wrote:

> Do people normally combine these two processes onto one machine?  Currently
> I have them on separate machines, but I am wondering whether they use that
> much CPU processing time; maybe I should combine them and create another DataNode.

Re: Hadoop 0.20 map/reduce Failing for old API

Posted by Edward Capriolo <ed...@gmail.com>.
On Fri, Nov 27, 2009 at 10:46 AM, Arv Mistry <ar...@kindsight.net> wrote:
> Thanks Rekha, I was missing the new library
> (hadoop-0.20.1-hdfs-core.jar) in my client.
>
> It seems to run a little further but I'm now getting a
> ClassCastException returned by the mapper. Note, this worked with the
> 0.19 load, so I'm assuming there's something additional in the
> configuration that I'm missing. Can anyone help?
>
> java.lang.ClassCastException: org.apache.hadoop.mapred.MultiFileSplit
> cannot be cast to org.apache.hadoop.mapred.FileSplit
>        at
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat
> .java:54)
>        at
> org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:338)
>        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>        at org.apache.hadoop.mapred.Child.main(Child.java:170)
>
> Cheers Arv
>
> [remainder of quoted message and log omitted; the full log appears in the
> original post below]

Given that you just added one jar file and are now seeing a
ClassCastException, your upgrade may have problems. Did you try to
upgrade in the same hadoop directory and possibly leave files from the
old install in the same directories as the new ones?
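
One way to check for that (a sketch, not a prescribed procedure;
TextInputFormat is just a convenient probe class) is to ask the JVM which
jar a Hadoop class was actually loaded from:

import org.apache.hadoop.mapred.TextInputFormat;

public class WhichJar {
    public static void main(String[] args) {
        // Prints the jar this class was loaded from. If the URL points
        // at a leftover 0.19 jar instead of the 0.20 one, the install
        // is mixing versions.
        System.out.println(TextInputFormat.class
                .getProtectionDomain().getCodeSource().getLocation());
    }
}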

RE: Hadoop 0.20 map/reduce Failing for old API

Posted by Arv Mistry <ar...@kindsight.net>.
Thanks Rekha, I was missing the new library
(hadoop-0.20.1-hdfs-core.jar) in my client.

It seems to run a little further but I'm now getting a
ClassCastException returned by the mapper. Note, this worked with the
0.19 load, so I'm assuming there's something additional in the
configuration that I'm missing. Can anyone help?

java.lang.ClassCastException: org.apache.hadoop.mapred.MultiFileSplit
cannot be cast to org.apache.hadoop.mapred.FileSplit
	at
org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat
.java:54)
	at
org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:338)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)

Cheers Arv
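
The trace above fails inside TextInputFormat.getRecordReader(), which
casts its InputSplit to FileSplit; any configuration that hands it a
MultiFileSplit (i.e. a MultiFileInputFormat) fails exactly this way.
Below is a minimal old-API sketch that pins the input format explicitly;
IdentityMapper and the command-line paths are placeholders, and whether
this pin is the right fix depends on how the job consumed splits under
0.19. If the 0.19 job really did use a MultiFileInputFormat, the matching
fix is a record reader that accepts MultiFileSplit instead.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.IdentityMapper;

public class PinTextInputFormat {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(PinTextInputFormat.class);
        conf.setJobName("pin-text-input-format");
        // TextInputFormat.getRecordReader() casts its split to FileSplit,
        // so the input format must be one that produces FileSplits.
        conf.setInputFormat(TextInputFormat.class);
        conf.setMapperClass(IdentityMapper.class);  // placeholder mapper
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        conf.setNumReduceTasks(0);  // map-only, for illustration
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}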

-----Original Message-----
From: Rekha Joshi [mailto:rekhajos@yahoo-inc.com] 
Sent: November 26, 2009 11:45 PM
To: common-user@hadoop.apache.org
Subject: Re: Hadoop 0.20 map/reduce Failing for old API

An exit status of 1 usually indicates configuration issues or incorrect
command invocation in hadoop 0.20 (incorrect params), if not a JVM crash.
Your logs show no indication of a crash, but a wrong path or command can
be the cause. Can you check whether your lib paths/data paths are correct?

If it is a memory-intensive task, you may also try tuning
mapred.child.java.opts / mapred.job.map.memory.mb. Thanks!

[quoted original message and log omitted; see the original post below]



Re: Hadoop 0.20 map/reduce Failing for old API

Posted by Rekha Joshi <re...@yahoo-inc.com>.
An exit status of 1 usually indicates configuration issues or incorrect command invocation in hadoop 0.20 (incorrect params), if not a JVM crash.
Your logs show no indication of a crash, but a wrong path or command can be the cause. Can you check whether your lib paths/data paths are correct?

If it is a memory-intensive task, you may also try tuning mapred.child.java.opts / mapred.job.map.memory.mb. Thanks!
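
As a rough illustration of those two knobs set from the submitting client
(the sizes below are placeholder guesses, not recommendations):

import org.apache.hadoop.mapred.JobConf;

public class MemoryKnobs {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Heap given to each spawned task JVM (placeholder size).
        conf.set("mapred.child.java.opts", "-Xmx512m");
        // Memory, in MB, accounted to each map task by the scheduler
        // (placeholder size; only takes effect where memory-based
        // limits are enabled).
        conf.set("mapred.job.map.memory.mb", "1024");
    }
}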

On 11/27/09 1:28 AM, "Arv Mistry" <ar...@kindsight.net> wrote:

Hi,

We've recently upgraded to hadoop 0.20. Writing to HDFS seems to be
working fine, but the map/reduce jobs are failing with the following
exception. Note, we have not moved to the new map/reduce API yet. In the
client that launches the job, the only change I have made is to now load
the three files: core-site.xml, hdfs-site.xml and mapred-site.xml rather
than hadoop-site.xml. Any ideas?

[log omitted; the full log appears in the original post below]



Hadoop 0.20 map/reduce Failing for old API

Posted by Arv Mistry <ar...@kindsight.net>.
Hi,

We've recently upgraded to hadoop 0.20. Writing to HDFS seems to be
working fine, but the map/reduce jobs are failing with the following
exception. Note, we have not moved to the new map/reduce API yet. In the
client that launches the job, the only change I have made is to now load
the three files: core-site.xml, hdfs-site.xml and mapred-site.xml rather
than hadoop-site.xml. Any ideas?

INFO   | jvm 1    | 2009/11/26 13:47:26 | 2009-11-26 13:47:26,328 INFO
[FileInputFormat] Total input paths to process : 711
INFO   | jvm 1    | 2009/11/26 13:47:28 | 2009-11-26 13:47:28,033 INFO
[JobClient] Running job: job_200911241319_0003
INFO   | jvm 1    | 2009/11/26 13:47:29 | 2009-11-26 13:47:29,036 INFO
[JobClient]  map 0% reduce 0%
INFO   | jvm 1    | 2009/11/26 13:47:36 | 2009-11-26 13:47:36,068 INFO
[JobClient] Task Id : attempt_200911241319_0003_m_000003_0, Status :
FAILED
INFO   | jvm 1    | 2009/11/26 13:47:36 | java.io.IOException: Task
process exit with nonzero status of 1.
INFO   | jvm 1    | 2009/11/26 13:47:36 | 	at
org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
INFO   | jvm 1    | 2009/11/26 13:47:36 | 
INFO   | jvm 1    | 2009/11/26 13:47:36 | 2009-11-26 13:47:36,094 WARN
[JobClient] Error reading task
outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taski
d=attempt_200911241319_0003_m_000003_0&filter=stdout
INFO   | jvm 1    | 2009/11/26 13:47:36 | 2009-11-26 13:47:36,096 WARN
[JobClient] Error reading task
outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taski
d=attempt_200911241319_0003_m_000003_0&filter=stderr
INFO   | jvm 1    | 2009/11/26 13:47:51 | 2009-11-26 13:47:51,162 INFO
[JobClient] Task Id : attempt_200911241319_0003_m_000000_0, Status :
FAILED
INFO   | jvm 1    | 2009/11/26 13:47:51 | java.io.IOException: Task
process exit with nonzero status of 1.
INFO   | jvm 1    | 2009/11/26 13:47:51 | 	at
org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
INFO   | jvm 1    | 2009/11/26 13:47:51 | 
INFO   | jvm 1    | 2009/11/26 13:47:51 | 2009-11-26 13:47:51,166 WARN
[JobClient] Error reading task
outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taski
d=attempt_200911241319_0003_m_000000_0&filter=stdout
INFO   | jvm 1    | 2009/11/26 13:47:51 | 2009-11-26 13:47:51,167 WARN
[JobClient] Error reading task
outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taski
d=attempt_200911241319_0003_m_000000_0&filter=stderr
INFO   | jvm 1    | 2009/11/26 13:47:52 | 2009-11-26 13:47:52,173 INFO
[JobClient]  map 50% reduce 0%
INFO   | jvm 1    | 2009/11/26 13:48:03 | 2009-11-26 13:48:03,219 INFO
[JobClient] Task Id : attempt_200911241319_0003_m_000001_0, Status :
FAILED
INFO   | jvm 1    | 2009/11/26 13:48:03 | Map output lost, rescheduling:
getMapOutput(attempt_200911241319_0003_m_000001_0,0) failed :
INFO   | jvm 1    | 2009/11/26 13:48:03 |
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
taskTracker/jobcache/job_200911241319_0003/attempt_200911241319_0003_m_0
00001_0/output/file.out.index in any of the configured local directories
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathT
oRead(LocalDirAllocator.java:389)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAlloca
tor.java:138)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.apache.hadoop.mapred.TaskTracker$MapOutputServlet.doGet(TaskTracker.
java:2886)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:2
16)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandler
Collection.java:230)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.Server.handle(Server.java:324)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConne
ction.java:864)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:
409)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 	at
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java
:522)
INFO   | jvm 1    | 2009/11/26 13:48:03 | 
INFO   | jvm 1    | 2009/11/26 13:48:06 | 2009-11-26 13:48:06,235 INFO
[JobClient] Task Id : attempt_200911241319_0003_m_000000_1, Status :
FAILED
INFO   | jvm 1    | 2009/11/26 13:48:06 | java.io.IOException: Task
process exit with nonzero status of 1.
INFO   | jvm 1    | 2009/11/26 13:48:06 | 	at
org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
INFO   | jvm 1    | 2009/11/26 13:48:06 | 
INFO   | jvm 1    | 2009/11/26 13:48:06 | java.io.IOException: Task
process exit with nonzero status of 1.
INFO   | jvm 1    | 2009/11/26 13:48:06 | 	at
org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:418)
INFO   | jvm 1    | 2009/11/26 13:48:06 | 
INFO   | jvm 1    | 2009/11/26 13:48:06 | 2009-11-26 13:48:06,239 WARN
[JobClient] Error reading task
outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taski
d=attempt_200911241319_0003_m_000000_1&filter=stdout
INFO   | jvm 1    | 2009/11/26 13:48:06 | 2009-11-26 13:48:06,245 WARN
[JobClient] Error reading task
outputhttp://dev-cs1.ca.kindsight.net:50060/tasklog?plaintext=true&taski
d=attempt_200911241319_0003_m_000000_1&filter=stderr
INFO   | jvm 1    | 2009/11/26 13:48:13 | 2009-11-26 13:48:13,302 INFO
[JobClient]  map 0% reduce 0%
INFO   | jvm 1    | 2009/11/26 13:48:16 | 2009-11-26 13:48:16,315 INFO
[JobClient]  map 50% reduce 0%
INFO   | jvm 1    | 2009/11/26 13:48:18 | 2009-11-26 13:48:18,324 INFO
[JobClient] Task Id : attempt_200911241319_0003_m_000000_2, Status :
FAILED
INFO   | jvm 1    | 2009/11/26 13:48:18 | java.io.IOException: Task
process exit with nonzero status of 1.


Re: Good idea to run NameNode and JobTracker on same machine?

Posted by Aaron Kimball <aa...@cloudera.com>.
The real kicker is going to be memory consumption of one or both of these
services. The NN in particular uses a large amount of RAM to store the
filesystem image. I think that those who are suggesting a breakeven point of
<= 10 nodes are lowballing. In practice, unless your machines are really
thin on the RAM (e.g., 2--4 GB), I haven't seen any cases where these
services need to be separated below the 20-node mark; I've also seen several
clusters of 40 nodes running fine with these services colocated. It depends
on how many files are in HDFS and how frequently you're submitting many
concurrent jobs to MapReduce.

If you're setting up a production environment that you plan to expand,
however, as a best practice you should configure the master node to have two
hostnames (e.g., "nn" and "jt") so that you can have separate hostnames in
fs.default.name and mapred.job.tracker; when the day comes that these
services are placed on different nodes, you'll then be able to just move one
of the hostnames over and not need to reconfigure all 20--40 other nodes.
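
A minimal sketch of that split, shown here via the client-side JobConf
for brevity; on a real cluster these values would live in core-site.xml
and mapred-site.xml, and the hostnames and ports below are placeholders:

import org.apache.hadoop.mapred.JobConf;

public class SeparateMasterHostnames {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Distinct hostnames for HDFS and MapReduce, even while both
        // resolve to the same physical master today.
        conf.set("fs.default.name", "hdfs://nn:9000");  // placeholder port
        conf.set("mapred.job.tracker", "jt:9001");      // placeholder port
        // Moving the NameNode or JobTracker later only means re-pointing
        // one hostname; the other nodes keep this same configuration.
    }
}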

- Aaron

On Thu, Nov 26, 2009 at 8:27 PM, Srigurunath Chakravarthi <
sriguru@yahoo-inc.com> wrote:

> Raymond,
> Load-wise, it should be very safe to run both JT and NN on a single node
> for small clusters (< 40 Task Trackers and/or Data Nodes). They don't use
> much CPU as such.
>
>  This may even work for larger clusters depending on the type of hardware
> you have and the Hadoop job mix. We usually observe < 5% CPU load with ~80
> DNs/TTs on an 8-core Intel processor-based box with 16GB RAM.
>
>  It is best that you observe CPU & mem load on the JT+NN node to take a
> call on whether to separate them. iostat, top or sar should tell you.
>
> Regards,
> Sriguru
>
> [nested quoted messages omitted]

RE: Good idea to run NameNode and JobTracker on same machine?

Posted by Srigurunath Chakravarthi <sr...@yahoo-inc.com>.
Raymond,
Load-wise, it should be very safe to run both JT and NN on a single node for small clusters (< 40 Task Trackers and/or Data Nodes). They don't use much CPU as such.

 This may even work for larger clusters depending on the type of hardware you have and the Hadoop job mix. We usually observe < 5% CPU load with ~80 DNs/TTs on an 8-core Intel processor-based box with 16GB RAM.

 It is best that you observe CPU & mem load on the JT+NN node to take a call on whether to separate them. iostat, top or sar should tell you.

Regards,
Sriguru

>-----Original Message-----
>From: John Martyniak [mailto:john@beforedawnsolutions.com]
>Sent: Friday, November 27, 2009 3:06 AM
>To: common-user@hadoop.apache.org
>Cc: <co...@hadoop.apache.org>
>Subject: Re: Good idea to run NameNode and JobTracker on same machine?
>
>I have a cluster of 4 machines plus one machine to run nn & jt.  I
>have heard that 5 or 6 is the magic #.  I will see when I add the next
>batch of machines.
>
>And it seems to be running fine.
>
>-John
>
>[nested quoted message omitted]

Re: Good idea to run NameNode and JobTracker on same machine?

Posted by John Martyniak <jo...@beforedawnsolutions.com>.
I have a cluster of 4 machines plus one machine to run nn & jt.  I  
have heard that 5 or 6 is the magic #.  I will see when I add the next  
batch of machines.

And it seems to be running fine.

-John

On Nov 26, 2009, at 11:38 AM, Yongqiang He <he...@gmail.com>  
wrote:

> I think it is definitely not a good idea to combine these two in a
> production environment.
>
> Thanks
> Yongqiang
> [quoted message omitted]

Re: Good idea to run NameNode and JobTracker on same machine?

Posted by Yongqiang He <he...@gmail.com>.
I think it is definitely not a good idea to combine these two in a
production environment.

Thanks
Yongqiang
On 11/26/09 6:26 AM, "Raymond Jennings III" <ra...@yahoo.com> wrote:

> Do people normally combine these two processes onto one machine?  Currently I
> have them on separate machines, but I am wondering whether they use that much
> CPU processing time; maybe I should combine them and create another DataNode.