Posted to hdfs-user@hadoop.apache.org by Justin Workman <ju...@gmail.com> on 2013/01/16 16:45:21 UTC

Re: Fair Scheduler is not Fair why?

It looks like the weight for both pools is equal and all map slots are in
use, so neither pool has priority for the next free slots. Try setting the
research weight to 2. This should let research claim slots as tech
releases them.

Sent from my iPhone
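[Editor's note: in the allocations file quoted below, Justin's suggestion would amount to raising the research weight; a sketch with only the weight changed:]

```xml
<?xml version="1.0"?>
<allocations>
  <pool name="tech">
    <minMaps>5</minMaps>
    <minReduces>5</minReduces>
    <maxRunningJobs>30</maxRunningJobs>
    <weight>1.0</weight>
  </pool>
  <pool name="research">
    <minMaps>5</minMaps>
    <minReduces>5</minReduces>
    <maxRunningJobs>30</maxRunningJobs>
    <!-- doubled: research is now entitled to 2/3 of free slots instead of 1/2 -->
    <weight>2.0</weight>
  </pool>
</allocations>
```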

On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bu...@gmail.com>
wrote:

Hi guys,

We configured the Fair Scheduler with CDH4, but it is not working properly.
Map Task Capacity = 1380
Reduce Task Capacity = 720

We created two users, tech and research, and gave both pools an equal
weight of 1. But when I start a job as the research user, no mappers are
allocated. Why? Please guide me.

<?xml version="1.0"?>
<allocations>
<pool name="tech">
  <minMaps>5</minMaps>
  <minReduces>5</minReduces>
  <maxRunningJobs>30</maxRunningJobs>
  <weight>1.0</weight>
</pool>
<pool name="research">
  <minMaps>5</minMaps>
  <minReduces>5</minReduces>
  <maxRunningJobs>30</maxRunningJobs>
  <weight>1.0</weight>
</pool>
</allocations>

Note: we have tested this with a Hadoop Streaming job.

Fair Scheduler Administration

Pools:

  Pool      Jobs  Map: min / max / running / fair share   Reduce: min / max / running / fair share   Mode
  research  1     5 / - / 90   / 690.0                    5 / - / 0  / 0.0                           FAIR
  tech      3     5 / - / 1266 / 690.0                    5 / - / 24 / 24.0                          FAIR
  default   0     0 / - / 0    / 0.0                      0 / - / 0  / 0.0                           FAIR

Running Jobs:

  Jan 16, 08:51  job_201301071639_2118 <http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2118>
    pool tech, streamjob5335328828469969152.jar
    maps: 30466/53724 finished, 583 running, fair share 313.5, weight 1.0
    reduces: 0/240 finished, 0 running, fair share 0.0, weight 1.0

  Jan 16, 09:56  job_201301071639_2147 <http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2147>
    pool research, streamjob8832181817213433660.jar
    maps: 4175/9581 finished, 90 running, fair share 690.0, weight 1.0
    reduces: 0/240 finished, 0 running, fair share 0.0, weight 1.0

  Jan 16, 10:01  job_201301071639_2148 <http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2148>
    pool tech, streamjob8773848575543653055.jar
    maps: 1842/15484 finished, 620 running, fair share 313.5, weight 1.0
    reduces: 0/240 finished, 0 running, fair share 0.0, weight 1.0

  Jan 16, 10:08  job_201301071639_2155 <http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2155>
    pool tech, counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle
    maps: 387/450 finished, 63 running, fair share 63.0, weight 1.0
    reduces: 0/24 finished, 24 running, fair share 24.0, weight 1.0
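[Editor's note: the fair shares in the scheduler UI dump above follow from splitting slot capacity in proportion to pool weight. A minimal sketch of that arithmetic, using the capacity and pool names from this thread (the real scheduler also accounts for min shares and actual demand):]

```python
def fair_shares(capacity, weights):
    """Split `capacity` slots among pools in proportion to their weights."""
    total = sum(weights.values())
    return {pool: capacity * w / total for pool, w in weights.items()}

# Equal weights: each pool's map fair share is 1380 / 2 = 690, as the UI shows.
print(fair_shares(1380, {"tech": 1.0, "research": 1.0}))

# With Justin's suggestion (research weight 2), research would be
# entitled to 1380 * 2/3 = 920 map slots.
print(fair_shares(1380, {"tech": 1.0, "research": 2.0}))
```

Note that without preemption this entitlement only governs which pool gets the *next* free slot; it does not reclaim the 1266 slots tech already holds.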

--

Re: Fair Scheduler is not Fair why?

Posted by Jeff Bean <jw...@cloudera.com>.
Hi Dhanasekaran,

The issue is not with Hadoop streaming. You can try this yourself:

On your local disk, touch ten files, like this:

mkdir stream
cd stream
touch 1 2 3 4 5 6 7 8 9 10

Then, put the files into HDFS:

hadoop fs -put stream stream

Now, put a unix sleep command into a shell script and make it executable:

echo sleep 10 > sleepten.sh
chmod +x sleepten.sh

Now you have all the ingredients you need to submit a hadoop streaming
sleep job to test the scheduler.

Submit sleepten.sh as the mapper with input directory stream; Hadoop
Streaming will launch ten mappers (one per input file).

Here's what I did:

hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -D
mypool=research -input stream -output bar -mapper ./sleepten.sh -file
./sleepten.sh

hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -D mypool=tech
-input stream -output baz -mapper ./sleepten.sh -file ./sleepten.sh

I have my cluster configured with the pool-name property set to "mypool".
This launches two jobs of ten mappers each, and the scheduler divides the
tasks fairly between research and tech.

Jeff

On Wed, Jan 16, 2013 at 10:07 AM, Dhanasekaran Anbalagan <bugcy013@gmail.com
> wrote:

> Hi Jeff,
>
> Thanks for your kind mail. I tested the sleep job and it works well. But
> Hadoop streaming jobs do not seem to follow the fair scheduling algorithm
> properly. Why? Is there another way to test Hadoop streaming jobs with
> the fair scheduler?
>
> Note: tested with RHadoop (rmr).
>
> -Dhanasekaran.
>
> Did I learn something today? If not, I wasted it.
>
>
> On Wed, Jan 16, 2013 at 12:02 PM, Jeff Bean <jw...@cloudera.com> wrote:
>
>> Validate your scheduler capacity and behavior by using sleep jobs. Submit
>> sleep jobs to the pools that mirror your production jobs and just check
>> that the scheduler pool allocation behaves as you expect. The nice thing
>> about sleep is that you can mimic your real jobs: numbers of tasks and how
>> long they run.
>>
>> You should be able to determine whether the hypothesis posed on this
>> thread is correct: that all the slots are taken by other tasks. Indeed,
>> your UI says that research has 90 running tasks after having completed
>> over 4000, but your email says no tasks are scheduled. I'm a little
>> confused.
>>
>> Jeff
>>
>>
>> On Wed, Jan 16, 2013 at 8:50 AM, Nan Zhu <zh...@gmail.com> wrote:
>>
>>> BTW, what I mentioned is fair-share preemption, not minimum-share
>>> preemption.
>>>
>>> An alternative way to achieve that is to set the minimum shares of the
>>> two queues to be equal (or any other allocation scheme you like), with
>>> their sum equal to the capacity of the cluster, and enable minimum-share
>>> preemption.
>>>
>>> Good Luck!
>>>
>>> Best,
>>>
>>> --
>>> Nan Zhu
>>> School of Computer Science,
>>> McGill University
>>>
>>>
>>> On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:
>>>
>>>  I think you should do that, so that when the allocation is
>>> inconsistent with the fair shares, tasks in the queue that occupies
>>> more than its fair share will be killed, and the freed slots will be
>>> assigned to the other queue (assuming their weights are equal).
>>>
>>> Best,
>>>
>>> --
>>> Nan Zhu
>>> School of Computer Science,
>>> McGill University
>>>
>>>
>>> On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:
>>>
>>> HI Nan,
>>>
>>> We have not enabled Fair Scheduler Preemption.
>>>
>>> -Dhanasekaran.
>>>
>>> Did I learn something today? If not, I wasted it.
>>>
>>>
>>> On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zh...@gmail.com> wrote:
>>>
>>>  have you enabled task preemption?
>>>
>>> Best,
>>>
>>> --
>>> Nan Zhu
>>> School of Computer Science,
>>> McGill University
>>>
>>>
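[Editor's note: Nan Zhu's description above can be sketched numerically. With preemption enabled, a pool running above its fair share has tasks killed so an under-served pool can reach its share. A toy model only, not the scheduler's actual code; pool numbers are taken from the UI dump in this thread:]

```python
def preempt(running, fair_share):
    """Toy model of fair-share preemption: return tasks to kill per pool so
    that no pool stays above its fair share while another is starved."""
    # Total slots the under-served pools are still owed.
    deficit = sum(max(0, fair_share[p] - running[p]) for p in running)
    kills = {}
    for pool in running:
        over = max(0, running[pool] - fair_share[pool])
        kills[pool] = min(over, deficit)  # only kill what starved pools can use
        deficit -= kills[pool]
    return kills

# tech holds 1266 map slots against a 690 fair share; research holds 90.
# Preemption would kill tech's 576 excess tasks so research can be scheduled.
print(preempt({"tech": 1266, "research": 90}, {"tech": 690, "research": 690}))
```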
>>
>
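[Editor's note: enabling the preemption discussed in this thread would, on the MR1 Fair Scheduler shipped with CDH4, look roughly like the following. This is a sketch; the property and element names should be verified against the fair scheduler documentation for your version.]

```xml
<!-- mapred-site.xml: turn preemption on -->
<property>
  <name>mapred.fairscheduler.preemption</name>
  <value>true</value>
</property>
```

and in the allocations file:

```xml
<allocations>
  <!-- preempt if a pool waits longer than 60s below its min share -->
  <defaultMinSharePreemptionTimeout>60</defaultMinSharePreemptionTimeout>
  <!-- preempt if a pool waits longer than 300s below half its fair share -->
  <fairSharePreemptionTimeout>300</fairSharePreemptionTimeout>
</allocations>
```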




Re: Fair Scheduler is not Fair why?

Posted by Dhanasekaran Anbalagan <bu...@gmail.com>.
Hi Jeff,

Thanks for your kind mail. I tested the sleep job and it works well. But
Hadoop streaming jobs do not seem to follow the fair scheduling algorithm
properly. Why? Is there another way to test Hadoop streaming jobs with the
fair scheduler?

Note: tested with RHadoop (rmr).

-Dhanasekaran.

Did I learn something today? If not, I wasted it.


On Wed, Jan 16, 2013 at 12:02 PM, Jeff Bean <jw...@cloudera.com> wrote:

> Validate your scheduler capacity and behavior by using sleep jobs. Submit
> sleep jobs to the pools that mirror your production jobs and just check
> that the scheduler pool allocation behaves as you expect. The nice thing
> about sleep is that you can mimic your real jobs: numbers of tasks and how
> long they run.
>
> You should be able to determine that the hypothesis posed on this thread
> is correct: that all the slots are taken by other tasks. Indeed, your UI
> says that research has 90 running tasks after having completed over 4000,
> but your emails says no tasks are scheduled. I'm a little confused.
>
> Jeff
>
>
> On Wed, Jan 16, 2013 at 8:50 AM, Nan Zhu <zh...@gmail.com> wrote:
>
>> BTW, what I mentioned is fairsharepreemption  not minimum share
>>
>> an alternative way to achieve that is to set minimum share of two queues
>> to be equal(or other allocation scheme you like), and sum of them is equal
>> to the capacity of the cluster, and enable minimumSharePreemption
>>
>> Good Luck!
>>
>> Best,
>>
>> --
>> Nan Zhu
>> School of Computer Science,
>> McGill University
>>
>>
>> On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:
>>
>>  I think you should do that, so that when the allocation is inconsistent
>> with fair share, the tasks in the queue which occupies more beyond it's
>> fair share will be killed, and the available slots would be assigned to the
>> other one (assuming the weights of them are the same)
>>
>> Best,
>>
>> --
>> Nan Zhu
>> School of Computer Science,
>> McGill University
>>
>>
>> On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:
>>
>> HI Nan,
>>
>> We have not enabled Fair Scheduler Preemption.
>>
>> -Dhanasekaran.
>>
>> Did I learn something today? If not, I wasted it.
>>
>>
>> On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zh...@gmail.com> wrote:
>>
>>  have you enabled task preemption?
>>
>> Best,
>>
>> --
>> Nan Zhu
>> School of Computer Science,
>> McGill University
>>
>>
>> On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
>>
>> Looks like weight for both pools is equal and all map slots are used.
>> Therefore I don't believe anyone has priority for the next slots. Try
>> setting research weight to 2. This should allow research to take slots as
>> tech released them.
>>
>> Sent from my iPhone
>>
>> On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bu...@gmail.com>
>> wrote:
>>
>>  HI Guys
>>
>> We configured fair scheduler with cdh4, Fair scheduler not work properly.
>> Map Task Capacity = 1380
>> Reduce Task Capacity = 720
>>
>> We create two users tech and research, we configured equal weight 1 But,
>> I stared job in research user mapper will not allocated why?
>> please guide me guys.
>>
>> <?xml version="1.0"?>
>> <allocations>
>> <pool name="tech">
>>   <minMaps>5</minMaps>
>>   <minReduces>5</minReduces>
>>   <maxRunningJobs>30</maxRunningJobs>
>>   <weight>1.0</weight>
>> </pool>
>> <pool name="research">
>>   <minMaps>5</minMaps>
>>   <minReduces>5</minReduces>
>>   <maxRunningJobs>30</maxRunningJobs>
>>   <weight>1.0</weight>
>> </pool>
>> </allocations>
>>
>> Note: we have tested with Hadoop Stream job.
>>
>> Fair Scheduler Administration
>>
>> Pools (Min Share / Max Share / Running / Fair Share; Scheduling Mode):
>>   research (1 job):   maps 5 / - / 90 / 690.0     reduces 5 / - / 0 / 0.0     FAIR
>>   tech     (3 jobs):  maps 5 / - / 1266 / 690.0   reduces 5 / - / 24 / 24.0   FAIR
>>   default  (0 jobs):  maps 0 / - / 0 / 0.0        reduces 0 / - / 0 / 0.0     FAIR
>>
>> Running Jobs (Finished / Running / Fair Share / Weight):
>>   Jan 16, 08:51  job_201301071639_2118 <http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2118>
>>     tech, streamjob5335328828469969152.jar: maps 30466/53724, 583 running, fair share 313.5, weight 1.0; reduces 0/24, 0 running, fair share 0.0, weight 1.0
>>   Jan 16, 09:56  job_201301071639_2147 <http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2147>
>>     research, streamjob8832181817213433660.jar: maps 4175/9581, 90 running, fair share 690.0, weight 1.0; reduces 0/24, 0 running, fair share 0.0, weight 1.0
>>   Jan 16, 10:01  job_201301071639_2148 <http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2148>
>>     tech, streamjob8773848575543653055.jar: maps 1842/15484, 620 running, fair share 313.5, weight 1.0; reduces 0/24, 0 running, fair share 0.0, weight 1.0
>>   Jan 16, 10:08  job_201301071639_2155 <http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2155>
>>     tech, counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle: maps 387/450, 63 running, fair share 63.0, weight 1.0; reduces 0/24, 24 running, fair share 24.0, weight 1.0
>>
>

Re: Fair Scheduler is not Fair why?

Posted by Dhanasekaran Anbalagan <bu...@gmail.com>.
HI Jeff,

Thanks for your kind mail. I have tested with a sleep job and it works
pretty well. But when we test with a Hadoop streaming job, the fair
scheduling algorithm does not behave properly. Why? Is there any other way
to test a Hadoop streaming job with the fair scheduler?

Note: tested with RHadoop's rmr package.

-Dhanasekaran.

Did I learn something today? If not, I wasted it.


On Wed, Jan 16, 2013 at 12:02 PM, Jeff Bean <jw...@cloudera.com> wrote:

> Validate your scheduler capacity and behavior by using sleep jobs. Submit
> sleep jobs to the pools that mirror your production jobs and just check
> that the scheduler pool allocation behaves as you expect. The nice thing
> about sleep is that you can mimic your real jobs: numbers of tasks and how
> long they run.
>
> You should be able to determine that the hypothesis posed on this thread
> is correct: that all the slots are taken by other tasks. Indeed, your UI
> says that research has 90 running tasks after having completed over 4000,
> but your email says no tasks are scheduled. I'm a little confused.
>
> Jeff
>
>
> On Wed, Jan 16, 2013 at 8:50 AM, Nan Zhu <zh...@gmail.com> wrote:
>
>> BTW, what I mentioned is fairsharepreemption  not minimum share
>>
>> an alternative way to achieve that is to set minimum share of two queues
>> to be equal(or other allocation scheme you like), and sum of them is equal
>> to the capacity of the cluster, and enable minimumSharePreemption
>>
>> Good Luck!
>>
>> Best,
>>
>> --
>> Nan Zhu
>> School of Computer Science,
>> McGill University
>>
>>
>> On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:
>>
>>  I think you should do that, so that when the allocation is inconsistent
>> with the fair share, tasks in the queue that occupy more than their fair
>> share will be killed, and the freed slots will be assigned to the
>> other one (assuming their weights are the same)
>>
>> Best,
>>
>> --
>> Nan Zhu
>> School of Computer Science,
>> McGill University
>>
>>
>> On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:
>>
>> HI Nan,
>>
>> We have not enabled Fair Scheduler Preemption.
>>
>> -Dhanasekaran.
>>
>> Did I learn something today? If not, I wasted it.
>>
>>
>> On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zh...@gmail.com> wrote:
>>
>>  have you enabled task preemption?
>>
>> Best,
>>
>> --
>> Nan Zhu
>> School of Computer Science,
>> McGill University
>>
>>
>> On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
>>
>> Looks like the weight for both pools is equal and all map slots are used.
>> Therefore I don't believe either pool has priority for the next slots. Try
>> setting the research weight to 2. This should allow research to take slots
>> as tech releases them.
>>
>> Sent from my iPhone
>>
>> On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bu...@gmail.com>
>> wrote:
>>
>>  HI Guys
>>
>> We configured fair scheduler with cdh4, Fair scheduler not work properly.
>> Map Task Capacity = 1380
>> Reduce Task Capacity = 720
>>
>> We create two users tech and research, we configured equal weight 1 But,
>> I stared job in research user mapper will not allocated why?
>> please guide me guys.
>>
>> <?xml version="1.0"?>
>> <allocations>
>> <pool name="tech">
>>   <minMaps>5</minMaps>
>>   <minReduces>5</minReduces>
>>   <maxRunningJobs>30</maxRunningJobs>
>>   <weight>1.0</weight>
>> </pool>
>> <pool name="research">
>>   <minMaps>5</minMaps>
>>   <minReduces>5</minReduces>
>>   <maxRunningJobs>30</maxRunningJobs>
>>   <weight>1.0</weight>
>> </pool>
>> </allocations>
>>
>> Note: we have tested with Hadoop Stream job.
>>
>

Re: Fair Scheduler is not Fair why?

Posted by Jeff Bean <jw...@cloudera.com>.
Validate your scheduler capacity and behavior by using sleep jobs. Submit
sleep jobs to the pools that mirror your production jobs and just check
that the scheduler pool allocation behaves as you expect. The nice thing
about sleep is that you can mimic your real jobs: numbers of tasks and how
long they run.
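Jeff's test could look like this (a sketch for CDH4 MR1; the examples jar location and the mapred.fairscheduler.pool property are assumptions that may vary with your install):

```shell
# Submit a sleep job into the research pool: 200 map tasks and 20 reduce
# tasks, each sleeping 60 s, roughly mimicking a long streaming job.
hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar sleep \
  -Dmapred.fairscheduler.pool=research \
  -m 200 -r 20 -mt 60000 -rt 60000

# Submit a competing sleep job into the tech pool, then compare shares
# on the scheduler page of the JobTracker UI (port 50030, /scheduler).
hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar sleep \
  -Dmapred.fairscheduler.pool=tech \
  -m 200 -r 20 -mt 60000 -rt 60000
```

With both jobs pending, the map slots taken by each pool should converge toward the pools' fair shares.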

You should be able to determine whether the hypothesis posed on this thread is
correct: that all the slots are taken by other tasks. Indeed, your UI says
that research has 90 running tasks after having completed over 4000, but
your email says no tasks are scheduled. I'm a little confused.

Jeff

On Wed, Jan 16, 2013 at 8:50 AM, Nan Zhu <zh...@gmail.com> wrote:

> BTW, what I mentioned is fairsharepreemption  not minimum share
>
> an alternative way to achieve that is to set minimum share of two queues
> to be equal(or other allocation scheme you like), and sum of them is equal
> to the capacity of the cluster, and enable minimumSharePreemption
>
> Good Luck!
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:
>
>  I think you should do that, so that when the allocation is inconsistent
> with fair share, the tasks in the queue which occupies more beyond it's
> fair share will be killed, and the available slots would be assigned to the
> other one (assuming the weights of them are the same)
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:
>
> HI Nan,
>
> We have not enabled Fair Scheduler Preemption.
>
> -Dhanasekaran.
>
> Did I learn something today? If not, I wasted it.
>
>
> On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zh...@gmail.com> wrote:
>
>  have you enabled task preemption?
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
>
> Looks like weight for both pools is equal and all map slots are used.
> Therefore I don't believe anyone has priority for the next slots. Try
> setting research weight to 2. This should allow research to take slots as
> tech released them.
>
> Sent from my iPhone
>
> On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bu...@gmail.com>
> wrote:
>
>  HI Guys
>
> We configured fair scheduler with cdh4, Fair scheduler not work properly.
> Map Task Capacity = 1380
> Reduce Task Capacity = 720
>
> We create two users tech and research, we configured equal weight 1 But, I
> stared job in research user mapper will not allocated why?
> please guide me guys.
>
> <?xml version="1.0"?>
> <allocations>
> <pool name="tech">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> <pool name="research">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> </allocations>
>
> Note: we have tested with Hadoop Stream job.
>

Re: Fair Scheduler is not Fair why?

Posted by Jeff Bean <jw...@cloudera.com>.
Validate your scheduler capacity and behavior by using sleep jobs. Submit
sleep jobs to the pools that mirror your production jobs and just check
that the scheduler pool allocation behaves as you expect. The nice thing
about sleep is that you can mimic your real jobs: numbers of tasks and how
long they run.

You should be able to determine that the hypothesis posed on this thread is
correct: that all the slots are taken by other tasks. Indeed, your UI says
that research has 90 running tasks after having completed over 4000, but
your emails says no tasks are scheduled. I'm a little confused.

Jeff

On Wed, Jan 16, 2013 at 8:50 AM, Nan Zhu <zh...@gmail.com> wrote:

> BTW, what I mentioned is fairsharepreemption  not minimum share
>
> an alternative way to achieve that is to set minimum share of two queues
> to be equal(or other allocation scheme you like), and sum of them is equal
> to the capacity of the cluster, and enable minimumSharePreemption
>
> Good Luck!
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:
>
>  I think you should do that, so that when the allocation is inconsistent
> with fair share, the tasks in the queue which occupies more beyond it's
> fair share will be killed, and the available slots would be assigned to the
> other one (assuming the weights of them are the same)
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:
>
> HI Nan,
>
> We have not enabled Fair Scheduler Preemption.
>
> -Dhanasekaran.
>
> Did I learn something today? If not, I wasted it.
>
>
> On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zh...@gmail.com> wrote:
>
>  have you enabled task preemption?
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
>
> Looks like weight for both pools is equal and all map slots are used.
> Therefore I don't believe anyone has priority for the next slots. Try
> setting research weight to 2. This should allow research to take slots as
> tech released them.
>
> Sent from my iPhone
>
> On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bu...@gmail.com>
> wrote:
>
>  HI Guys
>
> We configured fair scheduler with cdh4, Fair scheduler not work properly.
> Map Task Capacity = 1380
> Reduce Task Capacity = 720
>
> We create two users tech and research, we configured equal weight 1 But, I
> stared job in research user mapper will not allocated why?
> please guide me guys.
>
> <?xml version="1.0"?>
> <allocations>
> <pool name="tech">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> <pool name="research">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> </allocations>
>
> Note: we have tested with Hadoop Stream job.
>
> Fair Scheduler Administration Pools PoolRunning JobsMap TasksReduce TasksScheduling
> Mode Min ShareMax ShareRunningFair ShareMin ShareMax ShareRunningFair
> Share research15-90690.05-00.0FAIR tech35-1266690.05-2424.0FAIR default00-
> 00.00-00.0FAIR Running Jobs SubmittedJobIDUserNamePoolPriorityMap TasksReduce
> Tasks FinishedRunningFair ShareWeightFinishedRunningFair ShareWeight Jan
> 16, 08:51 job_201301071639_2118<http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2118>
> tech streamjob5335328828469969152.jar   30466 / 53724583313.5 1.0 0 / 240
> 0.0 1.0 Jan 16, 09:56 job_201301071639_2147<http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2147>
> research streamjob8832181817213433660.jar   4175 / 958190690.0 1.0 0 / 240
> 0.0 1.0 Jan 16, 10:01 job_201301071639_2148<http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2148>
> tech streamjob8773848575543653055.jar   1842 / 15484620313.5 1.0 0 / 240
> 0.0 1.0 Jan 16, 10:08 job_201301071639_2155<http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2155>
> tech counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle   387
> / 4506363.0 1.0 0 / 242424.0 1.0
>
> --
>
>
>
>
>
>
>
>
>

Re: Fair Scheduler is not Fair why?

Posted by Jeff Bean <jw...@cloudera.com>.
Validate your scheduler capacity and behavior by using sleep jobs. Submit
sleep jobs to the pools that mirror your production jobs and just check
that the scheduler pool allocation behaves as you expect. The nice thing
about sleep is that you can mimic your real jobs: numbers of tasks and how
long they run.

You should be able to determine that the hypothesis posed on this thread is
correct: that all the slots are taken by other tasks. Indeed, your UI says
that research has 90 running tasks after having completed over 4000, but
your emails says no tasks are scheduled. I'm a little confused.

Jeff

On Wed, Jan 16, 2013 at 8:50 AM, Nan Zhu <zh...@gmail.com> wrote:

> BTW, what I mentioned is fairsharepreemption  not minimum share
>
> an alternative way to achieve that is to set minimum share of two queues
> to be equal(or other allocation scheme you like), and sum of them is equal
> to the capacity of the cluster, and enable minimumSharePreemption
>
> Good Luck!
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:
>
>  I think you should do that, so that when the allocation is inconsistent
> with fair share, the tasks in the queue which occupies more beyond it's
> fair share will be killed, and the available slots would be assigned to the
> other one (assuming the weights of them are the same)
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:
>
> HI Nan,
>
> We have not enabled Fair Scheduler Preemption.
>
> -Dhanasekaran.
>
> Did I learn something today? If not, I wasted it.
>
>
> On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zh...@gmail.com> wrote:
>
>  have you enabled task preemption?
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
>
> Looks like weight for both pools is equal and all map slots are used.
> Therefore I don't believe anyone has priority for the next slots. Try
> setting research weight to 2. This should allow research to take slots as
> tech released them.
>
> Sent from my iPhone
>
> On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bu...@gmail.com>
> wrote:
>
>  HI Guys
>
> We configured fair scheduler with cdh4, Fair scheduler not work properly.
> Map Task Capacity = 1380
> Reduce Task Capacity = 720
>
> We create two users tech and research, we configured equal weight 1 But, I
> stared job in research user mapper will not allocated why?
> please guide me guys.
>
> <?xml version="1.0"?>
> <allocations>
> <pool name="tech">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> <pool name="research">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> </allocations>
>
> Note: we have tested with Hadoop Stream job.
>
> Fair Scheduler Administration Pools PoolRunning JobsMap TasksReduce TasksScheduling
> Mode Min ShareMax ShareRunningFair ShareMin ShareMax ShareRunningFair
> Share research15-90690.05-00.0FAIR tech35-1266690.05-2424.0FAIR default00-
> 00.00-00.0FAIR Running Jobs SubmittedJobIDUserNamePoolPriorityMap TasksReduce
> Tasks FinishedRunningFair ShareWeightFinishedRunningFair ShareWeight Jan
> 16, 08:51 job_201301071639_2118<http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2118>
> tech streamjob5335328828469969152.jar   30466 / 53724583313.5 1.0 0 / 240
> 0.0 1.0 Jan 16, 09:56 job_201301071639_2147<http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2147>
> research streamjob8832181817213433660.jar   4175 / 958190690.0 1.0 0 / 240
> 0.0 1.0 Jan 16, 10:01 job_201301071639_2148<http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2148>
> tech streamjob8773848575543653055.jar   1842 / 15484620313.5 1.0 0 / 240
> 0.0 1.0 Jan 16, 10:08 job_201301071639_2155<http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2155>
> tech counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle   387
> / 4506363.0 1.0 0 / 242424.0 1.0
>
> --
>
>
>
>
>
>
>
>
>

Re: Fair Scheduler is not Fair why?

Posted by Jeff Bean <jw...@cloudera.com>.
Validate your scheduler capacity and behavior by using sleep jobs. Submit
sleep jobs to the pools that mirror your production jobs and just check
that the scheduler pool allocation behaves as you expect. The nice thing
about sleep is that you can mimic your real jobs: numbers of tasks and how
long they run.

You should be able to determine that the hypothesis posed on this thread is
correct: that all the slots are taken by other tasks. Indeed, your UI says
that research has 90 running tasks after having completed over 4000, but
your emails says no tasks are scheduled. I'm a little confused.

Jeff

On Wed, Jan 16, 2013 at 8:50 AM, Nan Zhu <zh...@gmail.com> wrote:

> BTW, what I mentioned is fairsharepreemption  not minimum share
>
> an alternative way to achieve that is to set minimum share of two queues
> to be equal(or other allocation scheme you like), and sum of them is equal
> to the capacity of the cluster, and enable minimumSharePreemption
>
> Good Luck!
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:
>
>  I think you should do that, so that when the allocation is inconsistent
> with fair share, the tasks in the queue which occupies more beyond it's
> fair share will be killed, and the available slots would be assigned to the
> other one (assuming the weights of them are the same)
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:
>
> HI Nan,
>
> We have not enabled Fair Scheduler Preemption.
>
> -Dhanasekaran.
>
> Did I learn something today? If not, I wasted it.
>
>
> On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zh...@gmail.com> wrote:
>
>  have you enabled task preemption?
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
>
> Looks like weight for both pools is equal and all map slots are used.
> Therefore I don't believe anyone has priority for the next slots. Try
> setting research weight to 2. This should allow research to take slots as
> tech released them.
>
> Sent from my iPhone
>
> On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bu...@gmail.com>
> wrote:
>
>  HI Guys
>
> We configured fair scheduler with cdh4, Fair scheduler not work properly.
> Map Task Capacity = 1380
> Reduce Task Capacity = 720
>
> We create two users tech and research, we configured equal weight 1 But, I
> stared job in research user mapper will not allocated why?
> please guide me guys.
>
> <?xml version="1.0"?>
> <allocations>
> <pool name="tech">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> <pool name="research">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> </allocations>
>
> Note: we have tested with Hadoop Stream job.
>
> Fair Scheduler Administration
>
> Pools:
>
>   Pool      Running Jobs   Map tasks (min / max / running / fair share)   Reduce tasks (min / max / running / fair share)   Mode
>   research  1              5 / - / 90 / 690.0                             5 / - / 0 / 0.0                                   FAIR
>   tech      3              5 / - / 1266 / 690.0                           5 / - / 24 / 24.0                                 FAIR
>   default   0              0 / - / 0 / 0.0                                0 / - / 0 / 0.0                                   FAIR
>
> Running Jobs (each job ID links to http://172.16.30.122:50030/jobdetails.jsp?jobid=<JobID>):
>
>   Jan 16, 08:51  job_201301071639_2118  tech      streamjob5335328828469969152.jar
>                  maps 30466/53724 finished, 583 running, fair share 313.5, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
>   Jan 16, 09:56  job_201301071639_2147  research  streamjob8832181817213433660.jar
>                  maps 4175/9581 finished, 90 running, fair share 690.0, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
>   Jan 16, 10:01  job_201301071639_2148  tech      streamjob8773848575543653055.jar
>                  maps 1842/15484 finished, 620 running, fair share 313.5, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
>   Jan 16, 10:08  job_201301071639_2155  tech      counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle
>                  maps 387/450 finished, 63 running, fair share 63.0, weight 1.0; reduces 0/24 finished, 24 running, fair share 24.0, weight 1.0
>
> --

Re: Fair Scheduler is not Fair why?

Posted by Nan Zhu <zh...@gmail.com>.
BTW, what I mentioned is fair-share preemption, not minimum-share preemption.

An alternative is to set the minimum shares of the two pools to be equal (or any other allocation you like) so that their sum equals the capacity of the cluster, and then enable minimum-share preemption.
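That suggestion (equal minimum shares summing to the cluster capacity, plus minimum-share preemption) could look roughly like this in the allocation file. This is only a sketch: the pool names and the 1380 map / 720 reduce slot capacities come from this thread, and the 60-second timeout is an illustrative value, not a recommendation.

```xml
<?xml version="1.0"?>
<allocations>
  <pool name="tech">
    <!-- half of the cluster's 1380 map slots and 720 reduce slots -->
    <minMaps>690</minMaps>
    <minReduces>360</minReduces>
    <weight>1.0</weight>
    <!-- preempt tasks from other pools if this pool stays below its
         minimum share for more than 60 seconds (illustrative value) -->
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
  </pool>
  <pool name="research">
    <minMaps>690</minMaps>
    <minReduces>360</minReduces>
    <weight>1.0</weight>
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
  </pool>
</allocations>
```

Note that preemption itself must also be switched on in mapred-site.xml (mapred.fairscheduler.preemption=true); the timeouts in the allocation file only control when it fires.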

Good Luck!

Best, 

-- 
Nan Zhu
School of Computer Science,
McGill University



On Wednesday, 16 January, 2013 at 11:43 AM, Nan Zhu wrote:

> I think you should. With preemption enabled, whenever the allocation diverges from the fair shares, tasks in the pool running beyond its fair share are killed and the freed slots are assigned to the other pool (assuming the weights are equal).
> 
> Best, 
> 
> -- 
> Nan Zhu
> School of Computer Science,
> McGill University
> 
> 
> 
> On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:
> 
> > HI Nan,
> > 
> > We have not enabled Fair Scheduler Preemption.
> > 
> > -Dhanasekaran. 
> > 
> > Did I learn something today? If not, I wasted it. 
> > 
> > On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zhunansjtu@gmail.com (mailto:zhunansjtu@gmail.com)> wrote:
> > > have you enabled task preemption? 
> > > 
> > > Best, 
> > > 
> > > -- 
> > > Nan Zhu
> > > School of Computer Science,
> > > McGill University
> > > 
> > > 
> > > 
> > > On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
> > > 
> > > > Looks like weight for both pools is equal and all map slots are used. Therefore I don't believe anyone has priority for the next slots. Try setting research weight to 2. This should allow research to take slots as tech released them. 
> > > > 
> > > > Sent from my iPhone
> > > > 
> > > > On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bugcy013@gmail.com (mailto:bugcy013@gmail.com)> wrote:
> > > > 
> > > > > Hi guys,
> > > > > 
> > > > > We configured the Fair Scheduler with CDH4, but it is not working properly.
> > > > > Map Task Capacity = 1380
> > > > > Reduce Task Capacity = 720 
> > > > > 
> > > > > We created two users, tech and research, and configured equal weights of 1. But when I start a job as the research user, no mappers are allocated. Why?
> > > > > Please guide me.
> > > > > 
> > > > > <?xml version="1.0"?>
> > > > > <allocations>
> > > > > <pool name="tech"> 
> > > > >   <minMaps>5</minMaps> 
> > > > >   <minReduces>5</minReduces> 
> > > > >   <maxRunningJobs>30</maxRunningJobs>
> > > > >   <weight>1.0</weight> 
> > > > > </pool>
> > > > > <pool name="research"> 
> > > > >   <minMaps>5</minMaps> 
> > > > >   <minReduces>5</minReduces> 
> > > > >   <maxRunningJobs>30</maxRunningJobs> 
> > > > >   <weight>1.0</weight> 
> > > > > </pool>
> > > > > </allocations>
> > > > > 
> > > > > Note: we tested this with a Hadoop Streaming job.
> > > > > 
> > > > > Fair Scheduler Administration
> > > > >
> > > > > Pools:
> > > > >
> > > > >   Pool      Running Jobs   Map tasks (min / max / running / fair share)   Reduce tasks (min / max / running / fair share)   Mode
> > > > >   research  1              5 / - / 90 / 690.0                             5 / - / 0 / 0.0                                   FAIR
> > > > >   tech      3              5 / - / 1266 / 690.0                           5 / - / 24 / 24.0                                 FAIR
> > > > >   default   0              0 / - / 0 / 0.0                                0 / - / 0 / 0.0                                   FAIR
> > > > >
> > > > > Running Jobs (each job ID links to http://172.16.30.122:50030/jobdetails.jsp?jobid=<JobID>):
> > > > >
> > > > >   Jan 16, 08:51  job_201301071639_2118  tech      streamjob5335328828469969152.jar
> > > > >                  maps 30466/53724 finished, 583 running, fair share 313.5, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
> > > > >   Jan 16, 09:56  job_201301071639_2147  research  streamjob8832181817213433660.jar
> > > > >                  maps 4175/9581 finished, 90 running, fair share 690.0, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
> > > > >   Jan 16, 10:01  job_201301071639_2148  tech      streamjob8773848575543653055.jar
> > > > >                  maps 1842/15484 finished, 620 running, fair share 313.5, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
> > > > >   Jan 16, 10:08  job_201301071639_2155  tech      counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle
> > > > >                  maps 387/450 finished, 63 running, fair share 63.0, weight 1.0; reduces 0/24 finished, 24 running, fair share 24.0, weight 1.0
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > -- 
> > > > >  
> > > > >  
> > > > >  
> > > 
> > 
> 




Re: Fair Scheduler is not Fair why?

Posted by Nan Zhu <zh...@gmail.com>.
I think you should. With preemption enabled, whenever the allocation diverges from the fair shares, tasks in the pool running beyond its fair share are killed and the freed slots are assigned to the other pool (assuming the weights are equal).
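For reference, the scheduler-wide preemption switch in the MR1 Fair Scheduler lives in mapred-site.xml, while the timeouts live in the allocation file. A minimal sketch (the 600-second timeout is an illustrative value, not a recommendation):

```xml
<!-- mapred-site.xml: preemption is off by default -->
<property>
  <name>mapred.fairscheduler.preemption</name>
  <value>true</value>
</property>

<!-- fair-scheduler allocation file: a pool that has been below half its
     fair share for this many seconds may preempt tasks from other pools -->
<allocations>
  <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
  <!-- existing pool definitions unchanged -->
</allocations>
```

With this in place, once the research pool has sat at 90 running maps against a 690 fair share for longer than the timeout, the scheduler should start killing tech's tasks to free slots for research.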

Best, 

-- 
Nan Zhu
School of Computer Science,
McGill University



On Wednesday, 16 January, 2013 at 11:32 AM, Dhanasekaran Anbalagan wrote:

> HI Nan,
> 
> We have not enabled Fair Scheduler Preemption.
> 
> -Dhanasekaran. 
> 
> Did I learn something today? If not, I wasted it. 
> 
> On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zhunansjtu@gmail.com (mailto:zhunansjtu@gmail.com)> wrote:
> > have you enabled task preemption? 
> > 
> > Best, 
> > 
> > -- 
> > Nan Zhu
> > School of Computer Science,
> > McGill University
> > 
> > 
> > 
> > On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
> > 
> > > Looks like weight for both pools is equal and all map slots are used. Therefore I don't believe anyone has priority for the next slots. Try setting research weight to 2. This should allow research to take slots as tech released them. 
> > > 
> > > Sent from my iPhone
> > > 
> > > On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bugcy013@gmail.com (mailto:bugcy013@gmail.com)> wrote:
> > > 
> > > > Hi guys,
> > > > 
> > > > We configured the Fair Scheduler with CDH4, but it is not working properly.
> > > > Map Task Capacity = 1380
> > > > Reduce Task Capacity = 720 
> > > > 
> > > > We created two users, tech and research, and configured equal weights of 1. But when I start a job as the research user, no mappers are allocated. Why?
> > > > Please guide me.
> > > > 
> > > > <?xml version="1.0"?>
> > > > <allocations>
> > > > <pool name="tech"> 
> > > >   <minMaps>5</minMaps> 
> > > >   <minReduces>5</minReduces> 
> > > >   <maxRunningJobs>30</maxRunningJobs>
> > > >   <weight>1.0</weight> 
> > > > </pool>
> > > > <pool name="research"> 
> > > >   <minMaps>5</minMaps> 
> > > >   <minReduces>5</minReduces> 
> > > >   <maxRunningJobs>30</maxRunningJobs> 
> > > >   <weight>1.0</weight> 
> > > > </pool>
> > > > </allocations>
> > > > 
> > > > Note: we tested this with a Hadoop Streaming job.
> > > > 
> > > > Fair Scheduler Administration
> > > >
> > > > Pools:
> > > >
> > > >   Pool      Running Jobs   Map tasks (min / max / running / fair share)   Reduce tasks (min / max / running / fair share)   Mode
> > > >   research  1              5 / - / 90 / 690.0                             5 / - / 0 / 0.0                                   FAIR
> > > >   tech      3              5 / - / 1266 / 690.0                           5 / - / 24 / 24.0                                 FAIR
> > > >   default   0              0 / - / 0 / 0.0                                0 / - / 0 / 0.0                                   FAIR
> > > >
> > > > Running Jobs (each job ID links to http://172.16.30.122:50030/jobdetails.jsp?jobid=<JobID>):
> > > >
> > > >   Jan 16, 08:51  job_201301071639_2118  tech      streamjob5335328828469969152.jar
> > > >                  maps 30466/53724 finished, 583 running, fair share 313.5, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
> > > >   Jan 16, 09:56  job_201301071639_2147  research  streamjob8832181817213433660.jar
> > > >                  maps 4175/9581 finished, 90 running, fair share 690.0, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
> > > >   Jan 16, 10:01  job_201301071639_2148  tech      streamjob8773848575543653055.jar
> > > >                  maps 1842/15484 finished, 620 running, fair share 313.5, weight 1.0; reduces 0/24 finished, 0 running, fair share 0.0, weight 1.0
> > > >   Jan 16, 10:08  job_201301071639_2155  tech      counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle
> > > >                  maps 387/450 finished, 63 running, fair share 63.0, weight 1.0; reduces 0/24 finished, 24 running, fair share 24.0, weight 1.0
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > -- 
> > > >  
> > > >  
> > > >  
> > 
> 


> > > > Map Tasks
> > > > Reduce Tasks
> > > > Scheduling Mode
> > > > 
> > > > Min Share
> > > > Max Share
> > > > Running
> > > > Fair Share
> > > > Min Share
> > > > Max Share
> > > > Running
> > > > Fair Share
> > > > 
> > > > research
> > > > 1
> > > > 5
> > > > -
> > > > 90
> > > > 690.0
> > > > 5
> > > > -
> > > > 0
> > > > 0.0
> > > > FAIR
> > > > 
> > > > tech
> > > > 3
> > > > 5
> > > > -
> > > > 1266
> > > > 690.0
> > > > 5
> > > > -
> > > > 24
> > > > 24.0
> > > > FAIR
> > > > 
> > > > default
> > > > 0
> > > > 0
> > > > -
> > > > 0
> > > > 0.0
> > > > 0
> > > > -
> > > > 0
> > > > 0.0
> > > > FAIR
> > > > 
> > > > 
> > > > 
> > > > Running Jobs
> > > > Submitted
> > > > JobID
> > > > User
> > > > Name
> > > > Pool
> > > > Priority
> > > > Map Tasks
> > > > Reduce Tasks
> > > > 
> > > > Finished
> > > > Running
> > > > Fair Share
> > > > Weight
> > > > Finished
> > > > Running
> > > > Fair Share
> > > > Weight
> > > > 
> > > > Jan 16, 08:51
> > > > job_201301071639_2118 (http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2118)
> > > > tech
> > > > streamjob5335328828469969152.jar
> > > > 
> > > > 
> > > > 30466 / 53724
> > > > 583
> > > > 313.5
> > > > 1.0
> > > > 0 / 24
> > > > 0
> > > > 0.0
> > > > 1.0
> > > > 
> > > > Jan 16, 09:56
> > > > job_201301071639_2147 (http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2147)
> > > > research
> > > > streamjob8832181817213433660.jar
> > > > 
> > > > 
> > > > 4175 / 9581
> > > > 90
> > > > 690.0
> > > > 1.0
> > > > 0 / 24
> > > > 0
> > > > 0.0
> > > > 1.0
> > > > 
> > > > Jan 16, 10:01
> > > > job_201301071639_2148 (http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2148)
> > > > tech
> > > > streamjob8773848575543653055.jar
> > > > 
> > > > 
> > > > 1842 / 15484
> > > > 620
> > > > 313.5
> > > > 1.0
> > > > 0 / 24
> > > > 0
> > > > 0.0
> > > > 1.0
> > > > 
> > > > Jan 16, 10:08
> > > > job_201301071639_2155 (http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2155)
> > > > tech
> > > > counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle
> > > > 
> > > > 
> > > > 387 / 450
> > > > 63
> > > > 63.0
> > > > 1.0
> > > > 0 / 24
> > > > 24
> > > > 24.0
> > > > 1.0
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > -- 
> > > >  
> > > >  
> > > >  
> > 
> 


Re: Fair Scheduler is not Fair why?

Posted by Dhanasekaran Anbalagan <bu...@gmail.com>.
HI Nan,

We have not enabled Fair Scheduler Preemption.

-Dhanasekaran.

Did I learn something today? If not, I wasted it.


On Wed, Jan 16, 2013 at 11:21 AM, Nan Zhu <zh...@gmail.com> wrote:

>  have you enabled task preemption?
>
> Best,
>
> --
> Nan Zhu
> School of Computer Science,
> McGill University
>
>
> On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:
>
> Looks like weight for both pools is equal and all map slots are used.
> Therefore I don't believe anyone has priority for the next slots. Try
> setting research weight to 2. This should allow research to take slots as
> tech released them.
>
> Sent from my iPhone
>
> On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bu...@gmail.com>
> wrote:
>
>  HI Guys
>
> We configured fair scheduler with cdh4, Fair scheduler not work properly.
> Map Task Capacity = 1380
> Reduce Task Capacity = 720
>
> We create two users tech and research, we configured equal weight 1 But, I
> stared job in research user mapper will not allocated why?
> please guide me guys.
>
> <?xml version="1.0"?>
> <allocations>
> <pool name="tech">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> <pool name="research">
>   <minMaps>5</minMaps>
>   <minReduces>5</minReduces>
>   <maxRunningJobs>30</maxRunningJobs>
>   <weight>1.0</weight>
> </pool>
> </allocations>
>
> Note: we have tested with Hadoop Stream job.
>
> Fair Scheduler Administration
>
> Pools
> Pool      Running Jobs  Map: Min / Max / Running / Fair Share  Reduce: Min / Max / Running / Fair Share  Scheduling Mode
> research  1             5 / - / 90 / 690.0                     5 / - / 0 / 0.0                           FAIR
> tech      3             5 / - / 1266 / 690.0                   5 / - / 24 / 24.0                         FAIR
> default   0             0 / - / 0 / 0.0                        0 / - / 0 / 0.0                           FAIR
>
> Running Jobs
> Submitted      JobID                  Pool      Name                              Map: Finished / Running / Fair Share / Weight  Reduce: Finished / Running / Fair Share / Weight
> Jan 16, 08:51  job_201301071639_2118  tech      streamjob5335328828469969152.jar  30466/53724 / 583 / 313.5 / 1.0                0/24 / 0 / 0.0 / 1.0
> Jan 16, 09:56  job_201301071639_2147  research  streamjob8832181817213433660.jar  4175/9581 / 90 / 690.0 / 1.0                   0/24 / 0 / 0.0 / 1.0
> Jan 16, 10:01  job_201301071639_2148  tech      streamjob8773848575543653055.jar  1842/15484 / 620 / 313.5 / 1.0                 0/24 / 0 / 0.0 / 1.0
> Jan 16, 10:08  job_201301071639_2155  tech      counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle  387/450 / 63 / 63.0 / 1.0  0/24 / 24 / 24.0 / 1.0
>
> --
>
>
>
>
>
>
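It is also worth double-checking how jobs are mapped to pools here: by default the MR1 fair scheduler takes the pool name from the job's user.name, so the research user's job should land in the research pool automatically. An explicit mapping can be configured instead. A sketch, assuming the conventional (but not mandated) property name pool.name:

```xml
<!-- mapred-site.xml: take the pool from an explicit job property instead of user.name -->
<property>
  <name>mapred.fairscheduler.poolnameproperty</name>
  <value>pool.name</value>
</property>
```

A streaming job would then select its pool by passing -D pool.name=research among the generic options on the hadoop jar command line.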

Re: Fair Scheduler is not Fair why?

Posted by Nan Zhu <zh...@gmail.com>.
have you enabled task preemption? 

Best, 

-- 
Nan Zhu
School of Computer Science,
McGill University



On Wednesday, 16 January, 2013 at 10:45 AM, Justin Workman wrote:

> Looks like weight for both pools is equal and all map slots are used. Therefore I don't believe anyone has priority for the next slots. Try setting research weight to 2. This should allow research to take slots as tech released them. 
> 
> Sent from my iPhone
> 
> On Jan 16, 2013, at 8:26 AM, Dhanasekaran Anbalagan <bugcy013@gmail.com (mailto:bugcy013@gmail.com)> wrote:
> 
> > HI Guys
> > 
> > We configured fair scheduler with cdh4, Fair scheduler not work properly.
> > Map Task Capacity = 1380
> > Reduce Task Capacity = 720 
> > 
> > We create two users tech and research, we configured equal weight 1 But, I stared job in research user mapper will not allocated why? 
> > please guide me guys.
> > 
> > <?xml version="1.0"?>
> > <allocations>
> > <pool name="tech"> 
> >   <minMaps>5</minMaps> 
> >   <minReduces>5</minReduces> 
> >   <maxRunningJobs>30</maxRunningJobs>
> >   <weight>1.0</weight> 
> > </pool>
> > <pool name="research"> 
> >   <minMaps>5</minMaps> 
> >   <minReduces>5</minReduces> 
> >   <maxRunningJobs>30</maxRunningJobs> 
> >   <weight>1.0</weight> 
> > </pool>
> > </allocations>
> > 
> > Note: we have tested with Hadoop Stream job.
> > 
> > Fair Scheduler Administration 
> > Pools
> > Pool
> > Running Jobs
> > Map Tasks
> > Reduce Tasks
> > Scheduling Mode
> > 
> > Min Share
> > Max Share
> > Running
> > Fair Share
> > Min Share
> > Max Share
> > Running
> > Fair Share
> > 
> > research
> > 1
> > 5
> > -
> > 90
> > 690.0
> > 5
> > -
> > 0
> > 0.0
> > FAIR
> > 
> > tech
> > 3
> > 5
> > -
> > 1266
> > 690.0
> > 5
> > -
> > 24
> > 24.0
> > FAIR
> > 
> > default
> > 0
> > 0
> > -
> > 0
> > 0.0
> > 0
> > -
> > 0
> > 0.0
> > FAIR
> > 
> > 
> > 
> > Running Jobs
> > Submitted
> > JobID
> > User
> > Name
> > Pool
> > Priority
> > Map Tasks
> > Reduce Tasks
> > 
> > Finished
> > Running
> > Fair Share
> > Weight
> > Finished
> > Running
> > Fair Share
> > Weight
> > 
> > Jan 16, 08:51
> > job_201301071639_2118 (http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2118)
> > tech
> > streamjob5335328828469969152.jar
> > 
> > 
> > 30466 / 53724
> > 583
> > 313.5
> > 1.0
> > 0 / 24
> > 0
> > 0.0
> > 1.0
> > 
> > Jan 16, 09:56
> > job_201301071639_2147 (http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2147)
> > research
> > streamjob8832181817213433660.jar
> > 
> > 
> > 4175 / 9581
> > 90
> > 690.0
> > 1.0
> > 0 / 24
> > 0
> > 0.0
> > 1.0
> > 
> > Jan 16, 10:01
> > job_201301071639_2148 (http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2148)
> > tech
> > streamjob8773848575543653055.jar
> > 
> > 
> > 1842 / 15484
> > 620
> > 313.5
> > 1.0
> > 0 / 24
> > 0
> > 0.0
> > 1.0
> > 
> > Jan 16, 10:08
> > job_201301071639_2155 (http://172.16.30.122:50030/jobdetails.jsp?jobid=job_201301071639_2155)
> > tech
> > counterfactualsim-prod.eagle-EagleDepthSignalDisabled-prod.eagle
> > 
> > 
> > 387 / 450
> > 63
> > 63.0
> > 1.0
> > 0 / 24
> > 24
> > 24.0
> > 1.0
> > 
> > 
> > 
> > 
> > 
> > 
> > -- 
> >  
> >  
> >  

