Posted to common-dev@hadoop.apache.org by Saurabh Agarwal <sr...@gmail.com> on 2010/05/26 13:08:25 UTC

job conf object

Hi,
I am toying around with the Hadoop configuration.
I am trying to replace HDFS with a common NFS mount. I only have map tasks,
so intermediate outputs need not be communicated.
Is there a way to make the temp directory local to the nodes, and to place
the job conf object and jar on an NFS mount so that all the nodes can
access them?
Saurabh Agarwal

Re: job conf object

Posted by Vinod KV <vi...@yahoo-inc.com>.
On Wednesday 26 May 2010 04:38 PM, Saurabh Agarwal wrote:
> Hi,
> I am toying around with the Hadoop configuration.
> I am trying to replace HDFS with a common NFS mount. I only have map tasks,
> so intermediate outputs need not be communicated.
> Is there a way to make the temp directory local to the nodes, and to place
> the job conf object and jar on an NFS mount so that all the nodes can
> access them?
> Saurabh Agarwal

In principle you can do this, because MapReduce uses the FileSystem APIs
everywhere, but you may run into some quirks.
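
For concreteness, here is a rough sketch of what such a setup might look
like with the 0.20-era JobConf API (untested; the paths /mnt/nfs and
/tmp/hadoop-local are placeholders for your mount point and node-local
scratch space):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class NfsBackedJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(NfsBackedJob.class);

            // Use the local (NFS-mounted) filesystem as the default
            // FileSystem instead of HDFS.
            conf.set("fs.default.name", "file:///");

            // Keep per-task scratch space on node-local disks, not on NFS.
            conf.set("mapred.local.dir", "/tmp/hadoop-local");

            // Shared directory where the framework stages job files
            // (job.xml, job.jar) so every node can read them.
            conf.set("mapred.system.dir", "/mnt/nfs/hadoop/system");

            // Map-only job: no shuffle, no reduces.
            conf.setNumReduceTasks(0);

            FileInputFormat.setInputPaths(conf, new Path("/mnt/nfs/input"));
            FileOutputFormat.setOutputPath(conf, new Path("/mnt/nfs/output"));
            // ... set your Mapper class, then JobClient.runJob(conf);
        }
    }

Note that mapred.system.dir has to live on the shared mount precisely so
that the job conf and jar are visible to all the nodes, while
mapred.local.dir stays node-local.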

OTOH, it is a very bad idea and highly discouraged to run MapReduce on
NFS - as soon as the number of nodes, and thus tasks, scales up, NFS will
become a bottleneck and tasks/jobs will start failing in hard-to-debug
ways.

+Vinod