Posted to mapreduce-user@hadoop.apache.org by exception <ex...@taomee.com> on 2010/10/26 09:01:13 UTC
running job without jar
Hi,
When launching a job in Hadoop, we usually run "hadoop jar xxx.jar input output".
Can I run a job directly from a Java program (in fully distributed mode), without packing a jar?
I know this will cause problems, because the remote nodes don't have the source code or the .class files to run the mapper/reducer. So my next question is: how do I tell Hadoop to copy user-specified files to the different nodes?
Thanks.
Re: running job without jar
Posted by David Rosenstrauch <da...@darose.net>.
On 10/26/2010 03:01 AM, exception wrote:
> [...]
Yes, this can be easily done. See:
http://www.mail-archive.com/mapreduce-user@hadoop.apache.org/msg01114.html
HTH,
DR
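[Editor's note: the linked message describes submitting a job from a plain Java program. A minimal driver sketch is below, written against the Hadoop 0.20-era org.apache.hadoop.mapreduce API that was current for this thread; the class names (SubmitFromJava, TokenMapper, SumReducer) are made up for illustration. Note that Hadoop still ships user code to the task nodes as a jar: setJarByClass locates whichever jar on the client classpath contains the given class, so the classes do get packaged at some point even when you launch with plain "java" instead of "hadoop jar".]

```java
// Sketch only: assumes the Hadoop 0.20-era org.apache.hadoop.mapreduce API.
// All class names and paths here are hypothetical.
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitFromJava {

  // Emits (word, 1) for every token in the input line.
  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      StringTokenizer it = new StringTokenizer(value.toString());
      while (it.hasMoreTokens()) {
        word.set(it.nextToken());
        ctx.write(word, ONE);
      }
    }
  }

  // Sums the counts for each word.
  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      ctx.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "submitted from plain java");
    // Hadoop still ships code to the task nodes as a jar; setJarByClass
    // finds the jar on the client classpath containing this class.
    job.setJarByClass(SubmitFromJava.class);
    job.setMapperClass(TokenMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

To submit to a real cluster rather than the local runner, the program must be launched with the Hadoop jars and the cluster's conf directory on its classpath, so that the Configuration picks up the JobTracker and HDFS addresses.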
Re: running job without jar
Posted by jingguo yao <ya...@gmail.com>.
As far as I know, there is no way to run a Java class on the cluster without packaging it into a jar.
To copy user-specified files to the different nodes, you can use the Distributed
Cache.
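[Editor's note: a minimal sketch of the Distributed Cache suggestion, using the old org.apache.hadoop.filecache.DistributedCache API that was current as of Hadoop 0.20. The HDFS paths are made up; the files must already exist in HDFS before the job is submitted.]

```java
// Sketch only: Hadoop 0.20-era DistributedCache API; paths are hypothetical.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;

public class CacheSetup {
  // Call this on the job Configuration before submitting the job.
  public static void configure(Configuration conf) throws Exception {
    // Copy an ordinary file to each task node's local working area:
    DistributedCache.addCacheFile(new URI("/user/me/lookup.dat"), conf);
    // Ship a jar and add it to the tasks' classpath:
    DistributedCache.addFileToClassPath(new Path("/user/me/extra.jar"), conf);
  }
}
```

If the driver is written against ToolRunner/GenericOptionsParser, the equivalent command-line form is "hadoop jar xxx.jar -files lookup.dat -libjars extra.jar input output".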
On Tue, Oct 26, 2010 at 3:01 PM, exception <ex...@taomee.com> wrote:
> [...]