Posted to common-user@hadoop.apache.org by javateck javateck <ja...@gmail.com> on 2009/04/22 02:07:31 UTC

anyone knows why setting mapred.tasktracker.map.tasks.maximum not working?

Does anyone know why setting *mapred.tasktracker.map.tasks.maximum* is not working?
I set it to 10, but I still see only 2 map tasks running when I run a single job.
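For context, in Hadoop 0.19 this property is a per-tasktracker setting: it is read from each node's hadoop-site.xml when the tasktracker starts, not from the job configuration. A sketch of the relevant entry (values here are illustrative):

```xml
<!-- hadoop-site.xml on each tasktracker node; requires a tasktracker restart -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>10</value>
  <description>Maximum number of map tasks this tasktracker runs simultaneously.</description>
</property>
```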

Re: anyone knows why setting mapred.tasktracker.map.tasks.maximum not working?

Posted by javateck javateck <ja...@gmail.com>.
Not exactly.
When I run a standalone server (one machine acting as namenode, datanode,
jobtracker, and tasktracker) with the map maximum configured to 10, and my
input is 174 files of 62~75 MB each with a block size of 65 MB, I can see
that 189 map tasks are generated for the job, but only 2 are running; the
others are waiting.

When I configured a second datanode with the same tasktracker settings, the
same job (which still produces 189 map tasks) ran 12 map tasks at once: it
used 2 map task slots on my namenode machine and 10 slots on the new
datanode.

I just can't figure out why the namenode machine runs only 2 map tasks
when 10 slots are configured.
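Incidentally, the 189-task count above is consistent with how FileInputFormat carves files into splits. A rough sketch of the per-file split count (simplified: it ignores min/goal split size and uses only FileInputFormat's SPLIT_SLOP constant of 1.1):

```java
// Sketch of per-file split counting, modeled on Hadoop 0.19's
// FileInputFormat.getSplits loop (simplified; assumes splitSize == blockSize).
public class SplitCount {
    static final double SPLIT_SLOP = 1.1; // same constant as FileInputFormat

    static int splitsForFile(long fileSize, long splitSize) {
        int splits = 0;
        long remaining = fileSize;
        // Carve off full-size splits while the remainder is more than 10% over.
        while ((double) remaining / splitSize > SPLIT_SLOP) {
            splits++;
            remaining -= splitSize;
        }
        if (remaining > 0) splits++; // trailing split absorbs the remainder
        return splits;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        System.out.println(SplitCount.splitsForFile(62 * mb, 65 * mb)); // 1
        System.out.println(SplitCount.splitsForFile(75 * mb, 65 * mb)); // 2
    }
}
```

With 174 files of 62~75 MB and a 65 MB block size, only files above roughly 71.5 MB (1.1 × 65 MB) produce two splits, which would account for 189 splits from 174 files.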

On Tue, Apr 21, 2009 at 7:47 PM, jason hadoop <ja...@gmail.com> wrote:

> There must be only 2 input splits being produced for your job.
> Either you have 2 unsplittable files, or the input file(s) you have are not
> large enough compared to the block size to be split.
>
> Table 6-1 in chapter 06 gives a breakdown of all of the configuration
> parameters that affect split size in hadoop 0.19. Alphas are available :)
>
> This is detailed in my book in ch06
>
> On Tue, Apr 21, 2009 at 5:07 PM, javateck javateck <javateck@gmail.com>
> wrote:
>
> > anyone knows why setting *mapred.tasktracker.map.tasks.maximum* not
> > working?
> > I set it to 10, but still see only 2 map tasks running when running one
> job
> >
>
>
>
> --
> Alpha Chapters of my book on Hadoop are available
> http://www.apress.com/book/view/9781430219422
>

Re: anyone knows why setting mapred.tasktracker.map.tasks.maximum not working?

Posted by jason hadoop <ja...@gmail.com>.
There must be only 2 input splits being produced for your job.
Either you have 2 unsplittable files, or the input file(s) you have are not
large enough compared to the block size to be split.

Table 6-1 in chapter 06 gives a breakdown of all of the configuration
parameters that affect split size in Hadoop 0.19. Alphas are available :)

This is detailed in my book in ch06
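For reference, the split size itself in 0.19 comes down to one formula in FileInputFormat (a sketch; goalSize is the total input size divided by the requested number of maps, and minSize comes from mapred.min.split.size):

```java
// Split-size formula as in Hadoop 0.19's FileInputFormat.computeSplitSize.
public class SplitSize {
    static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        // The block size caps the split; minSize puts a floor under it.
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        // With a large goal size, the 65 MB block size wins.
        System.out.println(SplitSize.computeSplitSize(1000 * mb, 1, 65 * mb) / mb); // 65
    }
}
```

So raising mapred.min.split.size above the block size, or requesting fewer maps (a larger goalSize), yields fewer, larger splits.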

On Tue, Apr 21, 2009 at 5:07 PM, javateck javateck <ja...@gmail.com> wrote:

> anyone knows why setting *mapred.tasktracker.map.tasks.maximum* not
> working?
> I set it to 10, but still see only 2 map tasks running when running one job
>



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422