Posted to hdfs-user@hadoop.apache.org by Mohit Anchlia <mo...@gmail.com> on 2015/10/04 21:36:44 UTC

Jobs Stuck

I have Hadoop running on one node and am trying to test a simple wordcount
example. However, the job is ACCEPTED but never gets a resource. I looked
in the Scheduler UI and it seems to have all the resources available for
execution. Could somebody help with what else could be the problem?

root.hdfs: 0.0% used

'root.hdfs' Queue Status
Used Resources: <memory:0, vCores:0>
Num Active Applications: 0
Num Pending Applications: 1
Min Resources: <memory:0, vCores:0>
Max Resources: <memory:1273, vCores:2>
Steady Fair Share: <memory:637, vCores:0>
Instantaneous Fair Share: <memory:1273, vCores:0>
ID: application_1443983171281_0004
User: hdfs
Name: wordcount
Application Type: MAPREDUCE
Queue: root.hdfs
Fair Share: 1273
StartTime: Sun Oct 4 12:21:42 -0700 2015
FinishTime: N/A
State: ACCEPTED
FinalStatus: UNDEFINED
Running Containers: 0
Allocated CPU VCores: 0
Allocated Memory MB: 0
Tracking UI: UNASSIGNED

RE: Jobs Stuck

Posted by "Naganarasimha G R (Naga)" <ga...@huawei.com>.
Hi Mohith,
I did not see your initial post. It seems you are using the FairScheduler, and it takes a similar approach via
"yarn.scheduler.increment-allocation-mb" and "yarn.scheduler.increment-allocation-vcores", which default to 1024 MB and 1 vcore.
Please decrease these values and try again if they are at their defaults or larger than what you are requesting for the MAP and REDUCE tasks.
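
For illustration, a minimal sketch (assuming the ResourceManager's yarn-site.xml is on the client classpath; CheckIncrementAllocation is a hypothetical helper, not code from this thread) that reads the FairScheduler increment settings in effect:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class CheckIncrementAllocation {
  public static void main(String[] args) {
    // YarnConfiguration loads yarn-default.xml and yarn-site.xml from the classpath.
    Configuration conf = new YarnConfiguration();
    // The FairScheduler rounds each container request up to a multiple of these values.
    System.out.println("yarn.scheduler.increment-allocation-mb = "
        + conf.getInt("yarn.scheduler.increment-allocation-mb", 1024));
    System.out.println("yarn.scheduler.increment-allocation-vcores = "
        + conf.getInt("yarn.scheduler.increment-allocation-vcores", 1));
  }
}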

+ Naga

________________________________
From: Naganarasimha Garla [naganarasimha.gr@gmail.com]
Sent: Monday, October 05, 2015 05:30
To: user@hadoop.apache.org
Subject: Re: Jobs Stuck

Hi Mohith,

Which version of Hadoop are you running, and is it the CapacityScheduler?
If it is the CapacityScheduler, then memory requests should be a multiple of "yarn.scheduler.minimum-allocation-mb" and no larger than "yarn.scheduler.maximum-allocation-mb".

+ Naga

On Mon, Oct 5, 2015 at 1:33 AM, Mohit Anchlia <mo...@gmail.com> wrote:
I changed my code to reduce the values, but I still see that the app requires 1.24 GB. Does it only work when there is an XML file?


conf.set("yarn.app.mapreduce.am.resource.mb", "1000");

conf.set("mapreduce.map.memory.mb", "500");

conf.set("mapreduce.reduce.memory.mb", "500");

On Sun, Oct 4, 2015 at 12:48 PM, Mohit Anchlia <mo...@gmail.com> wrote:
I just noticed that memory resources are 1273 but my application is showing a memory of 1.24 GB. Is that a problem?

On Sun, Oct 4, 2015 at 12:36 PM, Mohit Anchlia <mo...@gmail.com> wrote:
I have Hadoop running on one node and am trying to test a simple wordcount example. However, the job is ACCEPTED but never gets a resource. I looked in the Scheduler UI and it seems to have all the resources available for execution. Could somebody help with what else could be the problem?

root.hdfs: 0.0% used

'root.hdfs' Queue Status
Used Resources: <memory:0, vCores:0>
Num Active Applications: 0
Num Pending Applications: 1
Min Resources: <memory:0, vCores:0>
Max Resources: <memory:1273, vCores:2>
Steady Fair Share: <memory:637, vCores:0>
Instantaneous Fair Share: <memory:1273, vCores:0>
ID: application_1443983171281_0004
User: hdfs
Name: wordcount
Application Type: MAPREDUCE
Queue: root.hdfs
Fair Share: 1273
StartTime: Sun Oct 4 12:21:42 -0700 2015
FinishTime: N/A
State: ACCEPTED
FinalStatus: UNDEFINED
Running Containers: 0
Allocated CPU VCores: 0
Allocated Memory MB: 0
Tracking UI: UNASSIGNED






Re: Jobs Stuck

Posted by Naganarasimha Garla <na...@gmail.com>.
Hi Mohith,

Which version of Hadoop are you running, and is it the CapacityScheduler?
If it is the CapacityScheduler, then memory requests should be a multiple of "yarn.scheduler.minimum-allocation-mb" and no larger than "yarn.scheduler.maximum-allocation-mb".
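
As a worked example, a minimal sketch (AllocationCheck is a hypothetical helper, not code from this thread; it assumes the usual defaults of 1024 MB and 8192 MB where the properties are unset, and the rounding is written out only to show the effect on a 500 MB request):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class AllocationCheck {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    int minMb = conf.getInt(YarnConfiguration.RM_SCHEDULER_MINIMUM_ALLOCATION_MB,
        YarnConfiguration.DEFAULT_RM_SCHEDULER_MINIMUM_ALLOCATION_MB); // 1024 by default
    int maxMb = conf.getInt(YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB,
        YarnConfiguration.DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_MB); // 8192 by default

    int requestedMb = 500; // hypothetical, e.g. what mapreduce.map.memory.mb was set to
    // The scheduler rounds each request up to the next multiple of the minimum allocation;
    // requests larger than the maximum allocation are rejected by the ResourceManager.
    int normalizedMb = ((requestedMb + minMb - 1) / minMb) * minMb;
    System.out.println(requestedMb + " MB is granted as a " + normalizedMb
        + " MB container (min=" + minMb + " MB, max=" + maxMb + " MB)");
  }
}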

+ Naga

On Mon, Oct 5, 2015 at 1:33 AM, Mohit Anchlia <mo...@gmail.com>
wrote:

> I changed my code to reduce the values, but I still see that the app
> requires 1.24 GB. Does it only work when there is an XML file?
>
> conf.set("yarn.app.mapreduce.am.resource.mb", "1000");
>
> conf.set("mapreduce.map.memory.mb", "500");
>
> conf.set("mapreduce.reduce.memory.mb", "500");
>
> On Sun, Oct 4, 2015 at 12:48 PM, Mohit Anchlia <mo...@gmail.com>
> wrote:
>
>> I just noticed that memory resources are 1273 but my application is
>> showing a memory of 1.24 GB. Is that a problem?
>>
>> On Sun, Oct 4, 2015 at 12:36 PM, Mohit Anchlia <mo...@gmail.com>
>> wrote:
>>
>>> I have Hadoop running on one node and am trying to test a simple wordcount
>>> example. However, the job is ACCEPTED but never gets a resource. I
>>> looked in the Scheduler UI and it seems to have all the resources available
>>> for execution. Could somebody help with what else could be the problem?
>>>
>>> root.hdfs: 0.0% used
>>>
>>> 'root.hdfs' Queue Status
>>> Used Resources: <memory:0, vCores:0>
>>> Num Active Applications: 0
>>> Num Pending Applications: 1
>>> Min Resources: <memory:0, vCores:0>
>>> Max Resources: <memory:1273, vCores:2>
>>> Steady Fair Share: <memory:637, vCores:0>
>>> Instantaneous Fair Share: <memory:1273, vCores:0>
>>> ID: application_1443983171281_0004
>>> User: hdfs
>>> Name: wordcount
>>> Application Type: MAPREDUCE
>>> Queue: root.hdfs
>>> Fair Share: 1273
>>> StartTime: Sun Oct 4 12:21:42 -0700 2015
>>> FinishTime: N/A
>>> State: ACCEPTED
>>> FinalStatus: UNDEFINED
>>> Running Containers: 0
>>> Allocated CPU VCores: 0
>>> Allocated Memory MB: 0
>>> Tracking UI: UNASSIGNED
>>>
>>>
>>>
>>
>

Re: Jobs Stuck

Posted by Mohit Anchlia <mo...@gmail.com>.
I changed my code to reduce the values, but I still see that the app
requires 1.24 GB. Does it only work when there is an XML file?

conf.set("yarn.app.mapreduce.am.resource.mb", "1000");

conf.set("mapreduce.map.memory.mb", "500");

conf.set("mapreduce.reduce.memory.mb", "500");

On Sun, Oct 4, 2015 at 12:48 PM, Mohit Anchlia <mo...@gmail.com>
wrote:

> I just noticed that memory resources are 1273 but my application is
> showing a memory of 1.24 GB. Is that a problem?
>
> On Sun, Oct 4, 2015 at 12:36 PM, Mohit Anchlia <mo...@gmail.com>
> wrote:
>
>> I have Hadoop running on one node and am trying to test a simple wordcount
>> example. However, the job is ACCEPTED but never gets a resource. I
>> looked in the Scheduler UI and it seems to have all the resources available
>> for execution. Could somebody help with what else could be the problem?
>>
>> root.hdfs: 0.0% used
>>
>> 'root.hdfs' Queue Status
>> Used Resources: <memory:0, vCores:0>
>> Num Active Applications: 0
>> Num Pending Applications: 1
>> Min Resources: <memory:0, vCores:0>
>> Max Resources: <memory:1273, vCores:2>
>> Steady Fair Share: <memory:637, vCores:0>
>> Instantaneous Fair Share: <memory:1273, vCores:0>
>> ID: application_1443983171281_0004
>> User: hdfs
>> Name: wordcount
>> Application Type: MAPREDUCE
>> Queue: root.hdfs
>> Fair Share: 1273
>> StartTime: Sun Oct 4 12:21:42 -0700 2015
>> FinishTime: N/A
>> State: ACCEPTED
>> FinalStatus: UNDEFINED
>> Running Containers: 0
>> Allocated CPU VCores: 0
>> Allocated Memory MB: 0
>> Tracking UI: UNASSIGNED
>>
>>
>>
>

Re: Jobs Stuck

Posted by Mohit Anchlia <mo...@gmail.com>.
I just noticed that memory resources are 1273 but my application is showing
a memory of 1.24 GB. Is that a problem?

On Sun, Oct 4, 2015 at 12:36 PM, Mohit Anchlia <mo...@gmail.com>
wrote:

> I have Hadoop running on one node and am trying to test a simple wordcount
> example. However, the job is ACCEPTED but never gets a resource. I
> looked in the Scheduler UI and it seems to have all the resources available
> for execution. Could somebody help with what else could be the problem?
>
> root.hdfs: 0.0% used
>
> 'root.hdfs' Queue Status
> Used Resources: <memory:0, vCores:0>
> Num Active Applications: 0
> Num Pending Applications: 1
> Min Resources: <memory:0, vCores:0>
> Max Resources: <memory:1273, vCores:2>
> Steady Fair Share: <memory:637, vCores:0>
> Instantaneous Fair Share: <memory:1273, vCores:0>
> ID: application_1443983171281_0004
> User: hdfs
> Name: wordcount
> Application Type: MAPREDUCE
> Queue: root.hdfs
> Fair Share: 1273
> StartTime: Sun Oct 4 12:21:42 -0700 2015
> FinishTime: N/A
> State: ACCEPTED
> FinalStatus: UNDEFINED
> Running Containers: 0
> Allocated CPU VCores: 0
> Allocated Memory MB: 0
> Tracking UI: UNASSIGNED
>
>
>
