Posted to common-user@hadoop.apache.org by Nate Carlson <ha...@natecarlson.com> on 2008/03/25 21:54:23 UTC

Sharing Hadoop slave nodes between multiple masters?

Is it possible to have a single slave process jobs for multiple masters?

If not, I guess I'll just run multiple slaves on the same machines.  ;)

(Trying to share slaves for our dev/staging/qa environments)

Thanks!

-Nate

Re: Sharing Hadoop slave nodes between multiple masters?

Posted by Nate Carlson <na...@natecarlson.com>.
On Wed, 26 Mar 2008, Amar Kamat wrote:
> There are two kinds of slaves in Hadoop, each with a corresponding 
> master: DataNodes report to the NameNode, and TaskTrackers report to 
> the JobTracker. Each slave is started with its master's address 
> hardcoded in the config passed to it at start-up, so sharing a single 
> slave between masters is not possible.

*nods*

That's kind of what I figured, but thought I'd ask in case someone else 
has worked on this before.

> Yes, it seems so. But be sure to put a limit on the number of tasks 
> that can run on the machine. A commonly used config is 4 maps and 4 
> reducers for a (mapred) slave that is not shared (i.e. per machine). 
> Try to make sure that the total number of tasks that can run 
> simultaneously is decent. Amar

Yeah... in most cases, the environments won't be running their slaves at 
the same time, so I'm not too worried about that. I'll see if I can work 
out a way to set hard limits though.
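For reference, hard limits like the 4 maps + 4 reducers Amar mentions are set per tasktracker in hadoop-site.xml. The exact property names vary by Hadoop version (older releases used a single combined maximum), so treat this as a sketch rather than a definitive config:

```xml
<!-- hadoop-site.xml on the slave: cap concurrent tasks per tasktracker.
     Property names are version-dependent; check hadoop-default.xml for
     your release before relying on these. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>
```

With several tasktrackers sharing one box, the per-tracker limits should be chosen so the sum across all trackers fits the machine.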

Thanks!

------------------------------------------------------------------------
| nate carlson | natecars@natecarlson.com | http://www.natecarlson.com |
|       depriving some poor village of its idiot since 1981            |
------------------------------------------------------------------------

Re: Sharing Hadoop slave nodes between multiple masters?

Posted by Amar Kamat <am...@yahoo-inc.com>.
On Tue, 25 Mar 2008, Nate Carlson wrote:

> Is it possible to have a single slave process jobs for multiple masters?
There are two kinds of slaves in Hadoop, each with a corresponding
master: DataNodes report to the NameNode, and TaskTrackers report to the
JobTracker. Each slave is started with its master's address hardcoded in
the config passed to it at start-up, so sharing a single slave between
masters is not possible.
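To illustrate, the master addresses a slave binds to live in its hadoop-site.xml; a slave can name exactly one NameNode and one JobTracker. The host names and ports below are placeholders:

```xml
<!-- hadoop-site.xml passed to the slave at start-up.
     Hosts/ports here are hypothetical; newer releases also accept an
     hdfs:// URI for fs.default.name. -->
<property>
  <name>fs.default.name</name>
  <value>namenode-dev:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker-dev:9001</value>
</property>
```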
>
> If not, I guess I'll just run multiple slaves on the same machines.  ;)
Yes, it seems so. But be sure to put a limit on the number of tasks that
can run on the machine. A commonly used config is 4 maps and 4 reducers
for a (mapred) slave that is not shared (i.e. per machine). Try to make
sure that the total number of tasks that can run simultaneously is
decent.
Amar
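The multiple-slaves-per-machine approach could be sketched roughly as below. HADOOP_HOME and the conf.<env> directory layout are assumptions; each conf dir would have to name its own master pair and also give its daemons distinct ports, dfs.data.dir, and log dirs so the environments don't collide. The sketch only prints the commands it would run:

```shell
# Dry-run sketch: one DataNode + TaskTracker per environment, each
# reading a separate conf dir that points at that environment's
# NameNode/JobTracker. Paths are assumptions; echo instead of exec.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop}
for env in dev staging qa; do
  conf="$HADOOP_HOME/conf.$env"   # must set distinct ports and data dirs
  echo "$HADOOP_HOME/bin/hadoop-daemon.sh --config $conf start datanode"
  echo "$HADOOP_HOME/bin/hadoop-daemon.sh --config $conf start tasktracker"
done
```

Dropping the echo would actually start the daemons, assuming hadoop-daemon.sh honors --config for each invocation.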
>
> (Trying to share slaves for our dev/staging/qa environments)
>
> Thanks!
>
> -Nate
>