Posted to user@hadoop.apache.org by "Lanati, Matteo" <Ma...@lrz.de> on 2013/06/01 15:57:11 UTC

(Unknown)

Hi all,

I stumbled upon this problem as well while trying to run the default wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is used as JT+NN, the other as TT+DN. Security is enabled. The input file is about 600 kB and the error is 

2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807

The logfile is attached, together with the configuration files. The version I'm using is

Hadoop 1.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
This command was run using /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar

If I run the default configuration (i.e. no security), then the job succeeds.

Is there something missing in how I set up my nodes? How is it possible that the estimated value for the required space is so big?
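
For reference, 9223372036854775807 is exactly 2^63 - 1, i.e. Java's Long.MAX_VALUE, so the figure looks more like an overflow or sentinel value than a genuine size estimate. A trivial, illustration-only check in plain Java:

    public class MaxValueCheck {
        public static void main(String[] args) {
            System.out.println(Long.MAX_VALUE); // prints 9223372036854775807
        }
    }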

Thanks in advance.

Matteo



>Which version of Hadoop are you using. A quick search shows me a bug
>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>similar symptoms. However, that was fixed a long while ago.
>
>
>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>reduno1985@googlemail.com> wrote:
>
>> This the content of the jobtracker log file :
>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000000 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000001 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000002 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000003 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000004 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000005 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000006 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> reduce tasks.
>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> task_201303231139_0001_m_000008, for tracker
>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task
>> 'attempt_201303231139_0001_m_000008_0' has completed
>> task_201303231139_0001_m_000008 successfully.
>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free; but we
>> expect map to take 1317624576693539401
>>
>>
>> The value in "we expect map to take" is far too large: 1317624576693539401
>> bytes!
>>
>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> reduno1985@googlemail.com> wrote:
>>
>>> The estimated value that Hadoop computes is far too large for the simple
>>> example that I am running.
>>>
>>> ---------- Forwarded message ----------
>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>>> Subject: Re: About running a simple wordcount mapreduce
>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>
>>>
>>> This is the output that I get. I am running two machines, as you can see. Do
>>> you see anything suspicious?
>>> Configured Capacity: 21145698304 (19.69 GB)
>>> Present Capacity: 17615499264 (16.41 GB)
>>> DFS Remaining: 17615441920 (16.41 GB)
>>> DFS Used: 57344 (56 KB)
>>> DFS Used%: 0%
>>> Under replicated blocks: 0
>>> Blocks with corrupt replicas: 0
>>> Missing blocks: 0
>>>
>>> -------------------------------------------------
>>> Datanodes available: 2 (2 total, 0 dead)
>>>
>>> Name: 11.1.0.6:50010
>>> Decommission Status : Normal
>>> Configured Capacity: 10572849152 (9.85 GB)
>>> DFS Used: 28672 (28 KB)
>>> Non DFS Used: 1765019648 (1.64 GB)
>>> DFS Remaining: 8807800832(8.2 GB)
>>> DFS Used%: 0%
>>> DFS Remaining%: 83.31%
>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>
>>>
>>> Name: 11.1.0.3:50010
>>> Decommission Status : Normal
>>> Configured Capacity: 10572849152 (9.85 GB)
>>> DFS Used: 28672 (28 KB)
>>> Non DFS Used: 1765179392 (1.64 GB)
>>> DFS Remaining: 8807641088(8.2 GB)
>>> DFS Used%: 0%
>>> DFS Remaining%: 83.3%
>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>
>>>
>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>> ashettia@hortonworks.com> wrote:
>>>
>>>> Hi Redwane,
>>>>
>>>> Please run the following command as hdfs user on any datanode. The
>>>> output will be something like this. Hope this helps
>>>>
>>>> hadoop dfsadmin -report
>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>> Present Capacity: 70375292928 (65.54 GB)
>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>> DFS Used: 480129024 (457.89 MB)
>>>> DFS Used%: 0.68%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> Thanks
>>>> -Abdelrahman
>>>>
>>>>
>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <re...@googlemail.com>wrote:
>>>>
>>>>>
>>>>> I have my hosts running on OpenStack virtual machine instances; each
>>>>> instance has a 10 GB hard disk. Is there a way to see how much space is in
>>>>> HDFS without the web UI?
>>>>>
>>>>>
>>>>> Sent from Samsung Mobile
>>>>>
>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>> Check in the web UI how much space you have on HDFS.
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>> ashettia@hortonworks.com> wrote:
>>>>>
>>>>> Hi Redwane ,
>>>>>
>>>>> It is possible that the hosts which are running the tasks do not have
>>>>> enough space. Those dirs are configured in mapred-site.xml.
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>> reduno1985@googlemail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>
>>>>>>
>>>>>> Hi
>>>>>> I am trying to run a wordcount MapReduce job on several files (<20
>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
>>>>>> The jobtracker log file shows the following warning:
>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
>take
>>>>>> 1317624576693539401
>>>>>>
>>>>>> Please help me ,
>>>>>> Best Regards,
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>


Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724

RE: Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Shahab,

thanks for the answer.
I'm submitting the job as the "super-user" just to avoid permission issues; in fact I can write/delete files. Moreover, the error is not an exception about security/permissions but a complaint about insufficient resources, caused by an incorrect estimate (9223372036854775807 bytes expected).
Is it possible that some security features trigger something in the code?
Best,

Matteo

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724

________________________________________
From: Shahab Yunus [shahab.yunus@gmail.com]
Sent: 01 June 2013 16:04
To: user@hadoop.apache.org
Subject: Re:

It seems to me that, since it fails only when you run with security turned on, data might not be written to disk because of permission checks, whereas without security no such checks are performed and the writes succeed. I don't know, I am not sure; it is just a hunch after your latest comment.

Regards,
Shahab


On Sat, Jun 1, 2013 at 9:57 AM, Lanati, Matteo <Ma...@lrz.de>> wrote:
Hi all,

I stumbled upon this problem as well while trying to run the default wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is used as JT+NN, the other as TT+DN. Security is enabled. The input file is about 600 kB and the error is

2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807

The logfile is attached, together with the configuration files. The version I'm using is

Hadoop 1.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
This command was run using /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar

If I run the default configuration (i.e. no security), then the job succeeds.

Is there something missing in how I set up my nodes? How is it possible that the estimated value for the required space is so big?

Thanks in advance.

Matteo



>Which version of Hadoop are you using. A quick search shows me a bug
>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>similar symptoms. However, that was fixed a long while ago.
>
>
>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>reduno1985@googlemail.com<ma...@googlemail.com>> wrote:
>
>> This the content of the jobtracker log file :
>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000000 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000001 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000002 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000003 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000004 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000005 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> tip:task_201303231139_0001_m_000006 has split on
>> node:/default-rack/hadoop0.novalocal
>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> reduce tasks.
>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> task_201303231139_0001_m_000008, for tracker
>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879<http://127.0.0.1:44879>'
>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task
>> 'attempt_201303231139_0001_m_000008_0' has completed
>> task_201303231139_0001_m_000008 successfully.
>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>> expect map to take 1317624576693539401
>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free; but we
>> expect map to take 1317624576693539401
>>
>>
>> The value in "we expect map to take" is far too large: 1317624576693539401
>> bytes!
>>
>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> reduno1985@googlemail.com<ma...@googlemail.com>> wrote:
>>
>>> The estimated value that Hadoop computes is far too large for the simple
>>> example that I am running.
>>>
>>> ---------- Forwarded message ----------
>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>>
>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>>> Subject: Re: About running a simple wordcount mapreduce
>>> To: Abdelrahman Shettia <as...@hortonworks.com>>
>>> Cc: user@hadoop.apache.org<ma...@hadoop.apache.org>, reduno1985 <re...@gmail.com>>
>>>
>>>
>>> This is the output that I get. I am running two machines, as you can see. Do
>>> you see anything suspicious?
>>> Configured Capacity: 21145698304 (19.69 GB)
>>> Present Capacity: 17615499264 (16.41 GB)
>>> DFS Remaining: 17615441920 (16.41 GB)
>>> DFS Used: 57344 (56 KB)
>>> DFS Used%: 0%
>>> Under replicated blocks: 0
>>> Blocks with corrupt replicas: 0
>>> Missing blocks: 0
>>>
>>> -------------------------------------------------
>>> Datanodes available: 2 (2 total, 0 dead)
>>>
>>> Name: 11.1.0.6:50010<http://11.1.0.6:50010>
>>> Decommission Status : Normal
>>> Configured Capacity: 10572849152 (9.85 GB)
>>> DFS Used: 28672 (28 KB)
>>> Non DFS Used: 1765019648 (1.64 GB)
>>> DFS Remaining: 8807800832(8.2 GB)
>>> DFS Used%: 0%
>>> DFS Remaining%: 83.31%
>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>
>>>
>>> Name: 11.1.0.3:50010<http://11.1.0.3:50010>
>>> Decommission Status : Normal
>>> Configured Capacity: 10572849152 (9.85 GB)
>>> DFS Used: 28672 (28 KB)
>>> Non DFS Used: 1765179392 (1.64 GB)
>>> DFS Remaining: 8807641088(8.2 GB)
>>> DFS Used%: 0%
>>> DFS Remaining%: 83.3%
>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>
>>>
>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>> ashettia@hortonworks.com<ma...@hortonworks.com>> wrote:
>>>
>>>> Hi Redwane,
>>>>
>>>> Please run the following command as hdfs user on any datanode. The
>>>> output will be something like this. Hope this helps
>>>>
>>>> hadoop dfsadmin -report
>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>> Present Capacity: 70375292928 (65.54 GB)
>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>> DFS Used: 480129024 (457.89 MB)
>>>> DFS Used%: 0.68%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> Thanks
>>>> -Abdelrahman
>>>>
>>>>
>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <re...@googlemail.com>>wrote:
>>>>
>>>>>
>>>>> I have my hosts running on OpenStack virtual machine instances; each
>>>>> instance has a 10 GB hard disk. Is there a way to see how much space is in
>>>>> HDFS without the web UI?
>>>>>
>>>>>
>>>>> Sent from Samsung Mobile
>>>>>
>>>>> Serge Blazhievsky <ha...@gmail.com>> wrote:
>>>>> Check in the web UI how much space you have on HDFS.
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>> ashettia@hortonworks.com<ma...@hortonworks.com>> wrote:
>>>>>
>>>>> Hi Redwane ,
>>>>>
>>>>> It is possible that the hosts which are running the tasks do not have
>>>>> enough space. Those dirs are configured in mapred-site.xml.
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>> reduno1985@googlemail.com<ma...@googlemail.com>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> ---------- Forwarded message ----------
>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>>
>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>> To: mapreduce-issues@hadoop.apache.org<ma...@hadoop.apache.org>
>>>>>>
>>>>>>
>>>>>> Hi
>>>>>> I am trying to run a wordcount MapReduce job on several files (<20
>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
>>>>>> The jobtracker log file shows the following warning:
>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
>take
>>>>>> 1317624576693539401
>>>>>>
>>>>>> Please help me ,
>>>>>> Best Regards,
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>>
>>


Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724

Re:

Posted by Shahab Yunus <sh...@gmail.com>.
It seems to me that, since it fails only when you run with security
turned on, data might not be written to disk because of permission
checks, whereas without security no such checks are performed and the
writes succeed. I don't know, I am not sure; it is just a hunch after
your latest comment.

Regards,
Shahab


On Sat, Jun 1, 2013 at 9:57 AM, Lanati, Matteo <Ma...@lrz.de> wrote:

> Hi all,
>
> I stumbled upon this problem as well while trying to run the default
> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
> about 600 kB and the error is
>
> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
> expect map to take 9223372036854775807
>
> The logfile is attached, together with the configuration files. The
> version I'm using is
>
> Hadoop 1.2.0
> Subversion
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> 1479473
> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> This command was run using
> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>
> If I run the default configuration (i.e. no security), then the job
> succeeds.
>
> Is there something missing in how I set up my nodes? How is it possible
> that the estimated value for the required space is so big?
>
> Thanks in advance.
>
> Matteo
>
>
>
> >Which version of Hadoop are you using. A quick search shows me a bug
> >https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >similar symptoms. However, that was fixed a long while ago.
> >
> >
> >On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >reduno1985@googlemail.com> wrote:
> >
> >> This the content of the jobtracker log file :
> >> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> Input
> >> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000000 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000001 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000002 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000003 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000004 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000005 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000006 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
> >> job_201303231139_0001 initialized successfully with 7 map tasks and 1
> >> reduce tasks.
> >> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
> >> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> task_201303231139_0001_m_000008, for tracker
> >> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> Task
> >> 'attempt_201303231139_0001_m_000008_0' has completed
> >> task_201303231139_0001_m_000008 successfully.
> >> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >>
> >>
> >> The value in "we expect map to take" is far too large: 1317624576693539401
> >> bytes!
> >>
> >> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> reduno1985@googlemail.com> wrote:
> >>
> >>> The estimated value that Hadoop computes is far too large for the simple
> >>> example that I am running.
> >>>
> >>> ---------- Forwarded message ----------
> >>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >>> Subject: Re: About running a simple wordcount mapreduce
> >>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >>>
> >>>
> >>> This is the output that I get. I am running two machines, as you can
> >>> see. Do you see anything suspicious?
> >>> Configured Capacity: 21145698304 (19.69 GB)
> >>> Present Capacity: 17615499264 (16.41 GB)
> >>> DFS Remaining: 17615441920 (16.41 GB)
> >>> DFS Used: 57344 (56 KB)
> >>> DFS Used%: 0%
> >>> Under replicated blocks: 0
> >>> Blocks with corrupt replicas: 0
> >>> Missing blocks: 0
> >>>
> >>> -------------------------------------------------
> >>> Datanodes available: 2 (2 total, 0 dead)
> >>>
> >>> Name: 11.1.0.6:50010
> >>> Decommission Status : Normal
> >>> Configured Capacity: 10572849152 (9.85 GB)
> >>> DFS Used: 28672 (28 KB)
> >>> Non DFS Used: 1765019648 (1.64 GB)
> >>> DFS Remaining: 8807800832(8.2 GB)
> >>> DFS Used%: 0%
> >>> DFS Remaining%: 83.31%
> >>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >>>
> >>>
> >>> Name: 11.1.0.3:50010
> >>> Decommission Status : Normal
> >>> Configured Capacity: 10572849152 (9.85 GB)
> >>> DFS Used: 28672 (28 KB)
> >>> Non DFS Used: 1765179392 (1.64 GB)
> >>> DFS Remaining: 8807641088(8.2 GB)
> >>> DFS Used%: 0%
> >>> DFS Remaining%: 83.3%
> >>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >>>
> >>>
> >>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >>> ashettia@hortonworks.com> wrote:
> >>>
> >>>> Hi Redwane,
> >>>>
> >>>> Please run the following command as hdfs user on any datanode. The
> >>>> output will be something like this. Hope this helps
> >>>>
> >>>> hadoop dfsadmin -report
> >>>> Configured Capacity: 81075068925 (75.51 GB)
> >>>> Present Capacity: 70375292928 (65.54 GB)
> >>>> DFS Remaining: 69895163904 (65.09 GB)
> >>>> DFS Used: 480129024 (457.89 MB)
> >>>> DFS Used%: 0.68%
> >>>> Under replicated blocks: 0
> >>>> Blocks with corrupt replicas: 0
> >>>> Missing blocks: 0
> >>>>
> >>>> Thanks
> >>>> -Abdelrahman
> >>>>
> >>>>
> >>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
> reduno1985@googlemail.com>wrote:
> >>>>
> >>>>>
> >>>>> I have my hosts running on openstack virtual machine instances each
> >>>>> instance has 10gb hard disc . Is there a way too see how much space is in
> >>>>> the hdfs without web ui .
> >>>>>
> >>>>>
> >>>>> Sent from Samsung Mobile
> >>>>>
> >>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >>>>> Check web ui how much space you have on hdfs???
> >>>>>
> >>>>> Sent from my iPhone
> >>>>>
> >>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >>>>> ashettia@hortonworks.com> wrote:
> >>>>>
> >>>>> Hi Redwane ,
> >>>>>
> >>>>> It is possible that the hosts which are running tasks are do not have
> >>>>> enough space. Those dirs are confiugred in mapred-site.xml
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >>>>> reduno1985@googlemail.com> wrote:
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> ---------- Forwarded message ----------
> >>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >>>>>> Subject: About running a simple wordcount mapreduce
> >>>>>> To: mapreduce-issues@hadoop.apache.org
> >>>>>>
> >>>>>>
> >>>>>> Hi
> >>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
> >>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >>>>>> The jobtracker log file shows the following warning:
> >>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
> >>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
> >>>>>> take 1317624576693539401
> >>>>>>
> >>>>>> Please help me ,
> >>>>>> Best Regards,
> >>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>
>
>
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748 Garching b. München (Germany)
> Phone: +49 89 35831 8724

RE: Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Harsh,

thanks for the quick investigation.
This seems to fit my case: the job is just submitted but stuck at 0%.
Bye,

Matteo

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724

________________________________________
From: Harsh J [harsh@cloudera.com]
Sent: 01 June 2013 17:50
To: <us...@hadoop.apache.org>
Subject: Re:

Does smell like a bug as that number you get is simply Long.MAX_VALUE,
or 8 exbibytes.

Looking at the sources, this turns out to be a rather funny Java issue
(there's a divide by zero happening and [1] suggests Long.MAX_VALUE
return in such a case). I've logged a bug report for this at
https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
reproducible case.

Does this happen consistently for you?

[1] http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
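
To see the overflow in isolation, here is a minimal, self-contained Java sketch (illustrative only; the class name and the sample values are made up, not taken from the JobTracker above). A positive double divided by 0.0 yields +Infinity, and Math.round(+Infinity) clamps to Long.MAX_VALUE, which is exactly the 9223372036854775807 in the warning:

  public class RoundOverflowDemo {
    public static void main(String[] args) {
      double inputSize = 600000.0;            // assumed ~600 kB of job input
      double completedMapsOutputSize = 1.0;   // assumed positive completed map output
      double completedMapsInputSize = 0.0;    // the divide-by-zero condition
      // positive / 0.0 -> +Infinity; Math.round(+Infinity) -> Long.MAX_VALUE
      long estimate = Math.round((inputSize * completedMapsOutputSize * 2.0)
          / completedMapsInputSize);
      System.out.println(estimate);           // prints 9223372036854775807
    }
  }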

On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de> wrote:
> Hi all,
>
> I stumbled upon this problem as well while trying to run the default wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is used as JT+NN, the other as TT+DN. Security is enabled. The input file is about 600 kB and the error is
>
> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
>
> The logfile is attached, together with the configuration files. The version I'm using is
>
> Hadoop 1.2.0
> Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> This command was run using /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>
> If I run the default configuration (i.e. no securty), then the job succeeds.
>
> Is there something missing in how I set up my nodes? How is it possible that the envisaged value for the needed space is so big?
>
> Thanks in advance.
>
> Matteo
>
>
>
>>Which version of Hadoop are you using. A quick search shows me a bug
>>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>>similar symptoms. However, that was fixed a long while ago.
>>
>>
>>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>reduno1985@googlemail.com> wrote:
>>
>>> This the content of the jobtracker log file :
>>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000000 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000001 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000002 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000003 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000004 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000005 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000006 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
>>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>>> reduce tasks.
>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>> task_201303231139_0001_m_000008, for tracker
>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task
>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>> task_201303231139_0001_m_000008 successfully.
>>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free; but we
>>> expect map to take 1317624576693539401
>>>
>>>
>>> The value in we excpect map to take is too huge   1317624576693539401
>>> bytes  !!!!!!!
>>>
>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>> reduno1985@googlemail.com> wrote:
>>>
>>>> The estimated value that the hadoop compute is too huge for the simple
>>>> example that i am running .
>>>>
>>>> ---------- Forwarded message ----------
>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>>>> Subject: Re: About running a simple wordcount mapreduce
>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>
>>>>
>>>> This the output that I get I am running two machines  as you can see  do
>>>> u see anything suspicious ?
>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>> Present Capacity: 17615499264 (16.41 GB)
>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>> DFS Used: 57344 (56 KB)
>>>> DFS Used%: 0%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> -------------------------------------------------
>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>
>>>> Name: 11.1.0.6:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>> DFS Used: 28672 (28 KB)
>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>> DFS Remaining: 8807800832(8.2 GB)
>>>> DFS Used%: 0%
>>>> DFS Remaining%: 83.31%
>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>
>>>>
>>>> Name: 11.1.0.3:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>> DFS Used: 28672 (28 KB)
>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>> DFS Remaining: 8807641088(8.2 GB)
>>>> DFS Used%: 0%
>>>> DFS Remaining%: 83.3%
>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>
>>>>
>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>> ashettia@hortonworks.com> wrote:
>>>>
>>>>> Hi Redwane,
>>>>>
>>>>> Please run the following command as hdfs user on any datanode. The
>>>>> output will be something like this. Hope this helps
>>>>>
>>>>> hadoop dfsadmin -report
>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>> DFS Used: 480129024 (457.89 MB)
>>>>> DFS Used%: 0.68%
>>>>> Under replicated blocks: 0
>>>>> Blocks with corrupt replicas: 0
>>>>> Missing blocks: 0
>>>>>
>>>>> Thanks
>>>>> -Abdelrahman
>>>>>
>>>>>
>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <re...@googlemail.com>wrote:
>>>>>
>>>>>>
>>>>>> I have my hosts running on openstack virtual machine instances each
>>>>>> instance has 10gb hard disc . Is there a way too see how much space is in
>>>>>> the hdfs without web ui .
>>>>>>
>>>>>>
>>>>>> Sent from Samsung Mobile
>>>>>>
>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>> Check web ui how much space you have on hdfs???
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>
>>>>>> Hi Redwane ,
>>>>>>
>>>>>> It is possible that the hosts which are running tasks are do not have
>>>>>> enough space. Those dirs are confiugred in mapred-site.xml
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> ---------- Forwarded message ----------
>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>
>>>>>>>
>>>>>>> Hi
>>>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
>>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
>>take
>>>>>>> 1317624576693539401
>>>>>>>
>>>>>>> Please help me ,
>>>>>>> Best Regards,
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>
>
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748 Garching b. München (Germany)
> Phone: +49 89 35831 8724



--
Harsh J

Re:

Posted by Azuryy Yu <az...@gmail.com>.
Just to add a bit more, continuing the above thread:

  protected synchronized long getEstimatedTotalMapOutputSize()  {
    if(completedMapsUpdates < threshholdToUse) {
      return 0;
    } else {
      long inputSize = job.getInputLength() + job.desiredMaps();
      //add desiredMaps() so that randomwriter case doesn't blow up
      //the multiplication might lead to overflow, casting it with
      //double prevents it
      long estimate = Math.round(((double)inputSize *
          completedMapsOutputSize * 2.0)/completedMapsInputSize);
      if (LOG.isDebugEnabled()) {
        LOG.debug("estimate total map output will be " + estimate);
      }
      return estimate;
    }
  }
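
Purely as an illustration of where the estimate blows up, here is a hedged sketch of a guarded version of the division above (this is NOT the actual MAPREDUCE-5288 patch; the helper name and standalone parameters are invented for the example):

  // Hypothetical helper, not Hadoop code: if completedMapsInputSize is zero,
  // return 0 instead of letting positive/0.0 become +Infinity, which
  // Math.round() would clamp to Long.MAX_VALUE.
  static long safeEstimate(long inputSize, long completedMapsOutputSize,
                           long completedMapsInputSize) {
    if (completedMapsInputSize == 0) {
      return 0L;
    }
    return Math.round(((double) inputSize * completedMapsOutputSize * 2.0)
        / completedMapsInputSize);
  }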


On Sun, Jun 2, 2013 at 12:34 AM, Azuryy Yu <az...@gmail.com> wrote:

> This should be fixed in hadoop-1.1.2 stable release.
> if we determine completedMapsInputSize is zero, then job's map tasks MUST
> be zero, so the estimated output size is zero.
> below is the code:
>
>   long getEstimatedMapOutputSize() {
>     long estimate = 0L;
>     if (job.desiredMaps() > 0) {
>       estimate = getEstimatedTotalMapOutputSize()  / job.desiredMaps();
>     }
>     return estimate;
>   }
>
>
>
> On Sat, Jun 1, 2013 at 11:49 PM, Harsh J <ha...@cloudera.com> wrote:
>
>> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> or 8 exbibytes.
>>
>> Looking at the sources, this turns out to be a rather funny Java issue
>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> return in such a case). I've logged a bug report for this at
>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> reproducible case.
>>
>> Does this happen consistently for you?
>>
>> [1]
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>
>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> wrote:
>> > Hi all,
>> >
>> > I stumbled upon this problem as well while trying to run the default
>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
>> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
>> about 600 kB and the error is
>> >
>> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
>> expect map to take 9223372036854775807
>> >
>> > The logfile is attached, together with the configuration files. The
>> version I'm using is
>> >
>> > Hadoop 1.2.0
>> > Subversion
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
>> 1479473
>> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> > This command was run using
>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> >
>> > If I run the default configuration (i.e. no securty), then the job
>> succeeds.
>> >
>> > Is there something missing in how I set up my nodes? How is it possible
>> that the envisaged value for the needed space is so big?
>> >
>> > Thanks in advance.
>> >
>> > Matteo
>> >
>> >
>> >
>> >>Which version of Hadoop are you using. A quick search shows me a bug
>> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>> >>similar symptoms. However, that was fixed a long while ago.
>> >>
>> >>
>> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> >>reduno1985@googlemail.com> wrote:
>> >>
>> >>> This the content of the jobtracker log file :
>> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
>> Input
>> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000000 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000001 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000002 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000003 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000004 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000005 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000006 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
>> Job
>> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> >>> reduce tasks.
>> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> Adding
>> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> >>> task_201303231139_0001_m_000008, for tracker
>> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
>> Task
>> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> >>> task_201303231139_0001_m_000008 successfully.
>> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>>
>> >>>
>> >>> The value in we excpect map to take is too huge   1317624576693539401
>> >>> bytes  !!!!!!!
>> >>>
>> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> >>> reduno1985@googlemail.com> wrote:
>> >>>
>> >>>> The estimated value that the hadoop compute is too huge for the
>> simple
>> >>>> example that i am running .
>> >>>>
>> >>>> ---------- Forwarded message ----------
>> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> >>>> Subject: Re: About running a simple wordcount mapreduce
>> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> >>>>
>> >>>>
>> >>>> This the output that I get I am running two machines  as you can see
>>  do
>> >>>> u see anything suspicious ?
>> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> >>>> Present Capacity: 17615499264 (16.41 GB)
>> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> >>>> DFS Used: 57344 (56 KB)
>> >>>> DFS Used%: 0%
>> >>>> Under replicated blocks: 0
>> >>>> Blocks with corrupt replicas: 0
>> >>>> Missing blocks: 0
>> >>>>
>> >>>> -------------------------------------------------
>> >>>> Datanodes available: 2 (2 total, 0 dead)
>> >>>>
>> >>>> Name: 11.1.0.6:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> >>>> DFS Remaining: 8807800832(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.31%
>> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> >>>>
>> >>>>
>> >>>> Name: 11.1.0.3:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> >>>> DFS Remaining: 8807641088(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.3%
>> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> >>>>
>> >>>>
>> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> >>>> ashettia@hortonworks.com> wrote:
>> >>>>
>> >>>>> Hi Redwane,
>> >>>>>
>> >>>>> Please run the following command as hdfs user on any datanode. The
>> >>>>> output will be something like this. Hope this helps
>> >>>>>
>> >>>>> hadoop dfsadmin -report
>> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> >>>>> DFS Used: 480129024 (457.89 MB)
>> >>>>> DFS Used%: 0.68%
>> >>>>> Under replicated blocks: 0
>> >>>>> Blocks with corrupt replicas: 0
>> >>>>> Missing blocks: 0
>> >>>>>
>> >>>>> Thanks
>> >>>>> -Abdelrahman
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
>> reduno1985@googlemail.com>wrote:
>> >>>>>
>> >>>>>>
>> >>>>>> I have my hosts running on openstack virtual machine instances each
>> >>>>>> instance has 10gb hard disc . Is there a way too see how much
>> space is in
>> >>>>>> the hdfs without web ui .
>> >>>>>>
>> >>>>>>
>> >>>>>> Sent from Samsung Mobile
>> >>>>>>
>> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> >>>>>> Check web ui how much space you have on hdfs???
>> >>>>>>
>> >>>>>> Sent from my iPhone
>> >>>>>>
>> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> >>>>>> ashettia@hortonworks.com> wrote:
>> >>>>>>
>> >>>>>> Hi Redwane ,
>> >>>>>>
>> >>>>>> It is possible that the hosts which are running tasks are do not
>> have
>> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> >>>>>> reduno1985@googlemail.com> wrote:
>> >>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> ---------- Forwarded message ----------
>> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> >>>>>>> Subject: About running a simple wordcount mapreduce
>> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Hi
>> >>>>>>> I am trying to run  a wordcount mapreduce job on several files
>> (<20
>> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>> >>>>>>> The jobtracker log file shows the following warning:
>> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> task.
>> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
>> map to
>> >>take
>> >>>>>>> 1317624576693539401
>> >>>>>>>
>> >>>>>>> Please help me ,
>> >>>>>>> Best Regards,
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >
>> >
>> > Matteo Lanati
>> > Distributed Resources Group
>> > Leibniz-Rechenzentrum (LRZ)
>> > Boltzmannstrasse 1
>> > 85748 Garching b. München (Germany)
>> > Phone: +49 89 35831 8724
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re:

Posted by Azuryy Yu <az...@gmail.com>.
just add more, continue the above thread:

  protected synchronized long getEstimatedTotalMapOutputSize()  {
    if(completedMapsUpdates < threshholdToUse) {
      return 0;
    } else {
      long inputSize = job.getInputLength() + job.desiredMaps();
      //add desiredMaps() so that randomwriter case doesn't blow up
      //the multiplication might lead to overflow, casting it with
      //double prevents it
      long estimate = Math.round(((double)inputSize *
          completedMapsOutputSize * 2.0)/completedMapsInputSize);
      if (LOG.isDebugEnabled()) {
        LOG.debug("estimate total map output will be " + estimate);
      }
      return estimate;
    }
  }


On Sun, Jun 2, 2013 at 12:34 AM, Azuryy Yu <az...@gmail.com> wrote:

> This should be fixed in hadoop-1.1.2 stable release.
> if we determine completedMapsInputSize is zero, then job's map tasks MUST
> be zero, so the estimated output size is zero.
> below is the code:
>
>   long getEstimatedMapOutputSize() {
>     long estimate = 0L;
>     if (job.desiredMaps() > 0) {
>       estimate = getEstimatedTotalMapOutputSize()  / job.desiredMaps();
>     }
>     return estimate;
>   }
>
>
>
> On Sat, Jun 1, 2013 at 11:49 PM, Harsh J <ha...@cloudera.com> wrote:
>
>> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> or 8 exbibytes.
>>
>> Looking at the sources, this turns out to be a rather funny Java issue
>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> return in such a case). I've logged a bug report for this at
>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> reproducible case.
>>
>> Does this happen consistently for you?
>>
>> [1]
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>
>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> wrote:
>> > Hi all,
>> >
>> > I stumbled upon this problem as well while trying to run the default
>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
>> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
>> about 600 kB and the error is
>> >
>> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
>> expect map to take 9223372036854775807
>> >
>> > The logfile is attached, together with the configuration files. The
>> version I'm using is
>> >
>> > Hadoop 1.2.0
>> > Subversion
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
>> 1479473
>> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> > This command was run using
>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> >
>> > If I run the default configuration (i.e. no securty), then the job
>> succeeds.
>> >
>> > Is there something missing in how I set up my nodes? How is it possible
>> that the envisaged value for the needed space is so big?
>> >
>> > Thanks in advance.
>> >
>> > Matteo
>> >
>> >
>> >
>> >>Which version of Hadoop are you using. A quick search shows me a bug
>> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>> >>similar symptoms. However, that was fixed a long while ago.
>> >>
>> >>
>> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> >>reduno1985@googlemail.com> wrote:
>> >>
>> >>> This the content of the jobtracker log file :
>> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
>> Input
>> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000000 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000001 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000002 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000003 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000004 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000005 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000006 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
>> Job
>> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> >>> reduce tasks.
>> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> Adding
>> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> >>> task_201303231139_0001_m_000008, for tracker
>> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
>> Task
>> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> >>> task_201303231139_0001_m_000008 successfully.
>> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>>
>> >>>
>> >>> The value in we excpect map to take is too huge   1317624576693539401
>> >>> bytes  !!!!!!!
>> >>>
>> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> >>> reduno1985@googlemail.com> wrote:
>> >>>
>> >>>> The estimated value that the hadoop compute is too huge for the
>> simple
>> >>>> example that i am running .
>> >>>>
>> >>>> ---------- Forwarded message ----------
>> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> >>>> Subject: Re: About running a simple wordcount mapreduce
>> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> >>>>
>> >>>>
>> >>>> This the output that I get I am running two machines  as you can see
>>  do
>> >>>> u see anything suspicious ?
>> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> >>>> Present Capacity: 17615499264 (16.41 GB)
>> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> >>>> DFS Used: 57344 (56 KB)
>> >>>> DFS Used%: 0%
>> >>>> Under replicated blocks: 0
>> >>>> Blocks with corrupt replicas: 0
>> >>>> Missing blocks: 0
>> >>>>
>> >>>> -------------------------------------------------
>> >>>> Datanodes available: 2 (2 total, 0 dead)
>> >>>>
>> >>>> Name: 11.1.0.6:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> >>>> DFS Remaining: 8807800832(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.31%
>> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> >>>>
>> >>>>
>> >>>> Name: 11.1.0.3:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> >>>> DFS Remaining: 8807641088(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.3%
>> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> >>>>
>> >>>>
>> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> >>>> ashettia@hortonworks.com> wrote:
>> >>>>
>> >>>>> Hi Redwane,
>> >>>>>
>> >>>>> Please run the following command as hdfs user on any datanode. The
>> >>>>> output will be something like this. Hope this helps
>> >>>>>
>> >>>>> hadoop dfsadmin -report
>> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> >>>>> DFS Used: 480129024 (457.89 MB)
>> >>>>> DFS Used%: 0.68%
>> >>>>> Under replicated blocks: 0
>> >>>>> Blocks with corrupt replicas: 0
>> >>>>> Missing blocks: 0
>> >>>>>
>> >>>>> Thanks
>> >>>>> -Abdelrahman
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
>> reduno1985@googlemail.com>wrote:
>> >>>>>
>> >>>>>>
>> >>>>>> I have my hosts running on openstack virtual machine instances each
>> >>>>>> instance has 10gb hard disc . Is there a way too see how much
>> space is in
>> >>>>>> the hdfs without web ui .
>> >>>>>>
>> >>>>>>
>> >>>>>> Sent from Samsung Mobile
>> >>>>>>
>> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> >>>>>> Check web ui how much space you have on hdfs???
>> >>>>>>
>> >>>>>> Sent from my iPhone
>> >>>>>>
>> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> >>>>>> ashettia@hortonworks.com> wrote:
>> >>>>>>
>> >>>>>> Hi Redwane ,
>> >>>>>>
>> >>>>>> It is possible that the hosts which are running tasks are do not
>> have
>> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> >>>>>> reduno1985@googlemail.com> wrote:
>> >>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> ---------- Forwarded message ----------
>> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> >>>>>>> Subject: About running a simple wordcount mapreduce
>> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Hi
>> >>>>>>> I am trying to run  a wordcount mapreduce job on several files
>> (<20
>> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>> >>>>>>> The jobtracker log file shows the following warning:
>> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> task.
>> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
>> map to
>> >>take
>> >>>>>>> 1317624576693539401
>> >>>>>>>
>> >>>>>>> Please help me ,
>> >>>>>>> Best Regards,
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >
>> >
>> > Matteo Lanati
>> > Distributed Resources Group
>> > Leibniz-Rechenzentrum (LRZ)
>> > Boltzmannstrasse 1
>> > 85748 Garching b. München (Germany)
>> > Phone: +49 89 35831 8724
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re:

Posted by Azuryy Yu <az...@gmail.com>.
just add more, continue the above thread:

  protected synchronized long getEstimatedTotalMapOutputSize()  {
    if(completedMapsUpdates < threshholdToUse) {
      return 0;
    } else {
      long inputSize = job.getInputLength() + job.desiredMaps();
      //add desiredMaps() so that randomwriter case doesn't blow up
      //the multiplication might lead to overflow, casting it with
      //double prevents it
      long estimate = Math.round(((double)inputSize *
          completedMapsOutputSize * 2.0)/completedMapsInputSize);
      if (LOG.isDebugEnabled()) {
        LOG.debug("estimate total map output will be " + estimate);
      }
      return estimate;
    }
  }


On Sun, Jun 2, 2013 at 12:34 AM, Azuryy Yu <az...@gmail.com> wrote:

> This should be fixed in hadoop-1.1.2 stable release.
> if we determine completedMapsInputSize is zero, then job's map tasks MUST
> be zero, so the estimated output size is zero.
> below is the code:
>
>   long getEstimatedMapOutputSize() {
>     long estimate = 0L;
>     if (job.desiredMaps() > 0) {
>       estimate = getEstimatedTotalMapOutputSize()  / job.desiredMaps();
>     }
>     return estimate;
>   }
>
>
>
> On Sat, Jun 1, 2013 at 11:49 PM, Harsh J <ha...@cloudera.com> wrote:
>
>> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> or 8 exbibytes.
>>
>> Looking at the sources, this turns out to be a rather funny Java issue
>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> return in such a case). I've logged a bug report for this at
>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> reproducible case.
>>
>> Does this happen consistently for you?
>>
>> [1]
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>
>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> wrote:
>> > Hi all,
>> >
>> > I stumbled upon this problem as well while trying to run the default
>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
>> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
>> about 600 kB and the error is
>> >
>> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
>> expect map to take 9223372036854775807
>> >
>> > The logfile is attached, together with the configuration files. The
>> version I'm using is
>> >
>> > Hadoop 1.2.0
>> > Subversion
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
>> 1479473
>> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> > This command was run using
>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> >
>> > If I run the default configuration (i.e. no securty), then the job
>> succeeds.
>> >
>> > Is there something missing in how I set up my nodes? How is it possible
>> that the envisaged value for the needed space is so big?
>> >
>> > Thanks in advance.
>> >
>> > Matteo
>> >
>> >
>> >
>> >>Which version of Hadoop are you using. A quick search shows me a bug
>> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>> >>similar symptoms. However, that was fixed a long while ago.
>> >>
>> >>
>> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> >>reduno1985@googlemail.com> wrote:
>> >>
>> >>> This the content of the jobtracker log file :
>> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
>> Input
>> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000000 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000001 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000002 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000003 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000004 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000005 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000006 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
>> Job
>> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> >>> reduce tasks.
>> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> Adding
>> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> >>> task_201303231139_0001_m_000008, for tracker
>> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
>> Task
>> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> >>> task_201303231139_0001_m_000008 successfully.
>> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>>
>> >>>
>> >>> The value in we excpect map to take is too huge   1317624576693539401
>> >>> bytes  !!!!!!!
>> >>>
>> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> >>> reduno1985@googlemail.com> wrote:
>> >>>
>> >>>> The estimated value that the hadoop compute is too huge for the
>> simple
>> >>>> example that i am running .
>> >>>>
>> >>>> ---------- Forwarded message ----------
>> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> >>>> Subject: Re: About running a simple wordcount mapreduce
>> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> >>>>
>> >>>>
>> >>>> This the output that I get I am running two machines  as you can see
>>  do
>> >>>> u see anything suspicious ?
>> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> >>>> Present Capacity: 17615499264 (16.41 GB)
>> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> >>>> DFS Used: 57344 (56 KB)
>> >>>> DFS Used%: 0%
>> >>>> Under replicated blocks: 0
>> >>>> Blocks with corrupt replicas: 0
>> >>>> Missing blocks: 0
>> >>>>
>> >>>> -------------------------------------------------
>> >>>> Datanodes available: 2 (2 total, 0 dead)
>> >>>>
>> >>>> Name: 11.1.0.6:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> >>>> DFS Remaining: 8807800832(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.31%
>> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> >>>>
>> >>>>
>> >>>> Name: 11.1.0.3:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> >>>> DFS Remaining: 8807641088(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.3%
>> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> >>>>
>> >>>>
>> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> >>>> ashettia@hortonworks.com> wrote:
>> >>>>
>> >>>>> Hi Redwane,
>> >>>>>
>> >>>>> Please run the following command as hdfs user on any datanode. The
>> >>>>> output will be something like this. Hope this helps
>> >>>>>
>> >>>>> hadoop dfsadmin -report
>> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> >>>>> DFS Used: 480129024 (457.89 MB)
>> >>>>> DFS Used%: 0.68%
>> >>>>> Under replicated blocks: 0
>> >>>>> Blocks with corrupt replicas: 0
>> >>>>> Missing blocks: 0
>> >>>>>
>> >>>>> Thanks
>> >>>>> -Abdelrahman
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
>> reduno1985@googlemail.com>wrote:
>> >>>>>
>> >>>>>>
>> >>>>>> I have my hosts running on openstack virtual machine instances each
>> >>>>>> instance has 10gb hard disc . Is there a way too see how much
>> space is in
>> >>>>>> the hdfs without web ui .
>> >>>>>>
>> >>>>>>
>> >>>>>> Sent from Samsung Mobile
>> >>>>>>
>> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> >>>>>> Check web ui how much space you have on hdfs???
>> >>>>>>
>> >>>>>> Sent from my iPhone
>> >>>>>>
>> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> >>>>>> ashettia@hortonworks.com> wrote:
>> >>>>>>
>> >>>>>> Hi Redwane ,
>> >>>>>>
>> >>>>>> It is possible that the hosts which are running tasks are do not
>> have
>> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> >>>>>> reduno1985@googlemail.com> wrote:
>> >>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> ---------- Forwarded message ----------
>> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> >>>>>>> Subject: About running a simple wordcount mapreduce
>> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Hi
>> >>>>>>> I am trying to run  a wordcount mapreduce job on several files
>> (<20
>> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>> >>>>>>> The jobtracker log file shows the following warning:
>> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> task.
>> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
>> map to
>> >>take
>> >>>>>>> 1317624576693539401
>> >>>>>>>
>> >>>>>>> Please help me ,
>> >>>>>>> Best Regards,
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >
>> >
>> > Matteo Lanati
>> > Distributed Resources Group
>> > Leibniz-Rechenzentrum (LRZ)
>> > Boltzmannstrasse 1
>> > 85748 Garching b. München (Germany)
>> > Phone: +49 89 35831 8724
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re:

Posted by Azuryy Yu <az...@gmail.com>.
just add more, continue the above thread:

  protected synchronized long getEstimatedTotalMapOutputSize()  {
    if(completedMapsUpdates < threshholdToUse) {
      return 0;
    } else {
      long inputSize = job.getInputLength() + job.desiredMaps();
      //add desiredMaps() so that randomwriter case doesn't blow up
      //the multiplication might lead to overflow, casting it with
      //double prevents it
      long estimate = Math.round(((double)inputSize *
          completedMapsOutputSize * 2.0)/completedMapsInputSize);
      if (LOG.isDebugEnabled()) {
        LOG.debug("estimate total map output will be " + estimate);
      }
      return estimate;
    }
  }


On Sun, Jun 2, 2013 at 12:34 AM, Azuryy Yu <az...@gmail.com> wrote:

> This should be fixed in hadoop-1.1.2 stable release.
> if we determine completedMapsInputSize is zero, then job's map tasks MUST
> be zero, so the estimated output size is zero.
> below is the code:
>
>   long getEstimatedMapOutputSize() {
>     long estimate = 0L;
>     if (job.desiredMaps() > 0) {
>       estimate = getEstimatedTotalMapOutputSize()  / job.desiredMaps();
>     }
>     return estimate;
>   }
>
>
>
> On Sat, Jun 1, 2013 at 11:49 PM, Harsh J <ha...@cloudera.com> wrote:
>
>> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> or 8 exbibytes.
>>
>> Looking at the sources, this turns out to be a rather funny Java issue
>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> return in such a case). I've logged a bug report for this at
>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> reproducible case.
>>
>> Does this happen consistently for you?
>>
>> [1]
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>
>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> wrote:
>> > Hi all,
>> >
>> > I stumbled upon this problem as well while trying to run the default
>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
>> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
>> about 600 kB and the error is
>> >
>> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
>> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
>> expect map to take 9223372036854775807
>> >
>> > The logfile is attached, together with the configuration files. The
>> version I'm using is
>> >
>> > Hadoop 1.2.0
>> > Subversion
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
>> 1479473
>> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> > This command was run using
>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> >
>> > If I run the default configuration (i.e. no securty), then the job
>> succeeds.
>> >
>> > Is there something missing in how I set up my nodes? How is it possible
>> that the envisaged value for the needed space is so big?
>> >
>> > Thanks in advance.
>> >
>> > Matteo
>> >
>> >
>> >
>> >>Which version of Hadoop are you using. A quick search shows me a bug
>> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>> >>similar symptoms. However, that was fixed a long while ago.
>> >>
>> >>
>> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> >>reduno1985@googlemail.com> wrote:
>> >>
>> >>> This the content of the jobtracker log file :
>> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
>> Input
>> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000000 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000001 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000002 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000003 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000004 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000005 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000006 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
>> Job
>> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> >>> reduce tasks.
>> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> Adding
>> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> >>> task_201303231139_0001_m_000008, for tracker
>> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
>> Task
>> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> >>> task_201303231139_0001_m_000008 successfully.
>> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
>> but we
>> >>> expect map to take 1317624576693539401
>> >>>
>> >>>
>> >>> The value in we excpect map to take is too huge   1317624576693539401
>> >>> bytes  !!!!!!!
>> >>>
>> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> >>> reduno1985@googlemail.com> wrote:
>> >>>
>> >>>> The estimated value that the hadoop compute is too huge for the
>> simple
>> >>>> example that i am running .
>> >>>>
>> >>>> ---------- Forwarded message ----------
>> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> >>>> Subject: Re: About running a simple wordcount mapreduce
>> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> >>>>
>> >>>>
>> >>>> This the output that I get I am running two machines  as you can see
>>  do
>> >>>> u see anything suspicious ?
>> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> >>>> Present Capacity: 17615499264 (16.41 GB)
>> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> >>>> DFS Used: 57344 (56 KB)
>> >>>> DFS Used%: 0%
>> >>>> Under replicated blocks: 0
>> >>>> Blocks with corrupt replicas: 0
>> >>>> Missing blocks: 0
>> >>>>
>> >>>> -------------------------------------------------
>> >>>> Datanodes available: 2 (2 total, 0 dead)
>> >>>>
>> >>>> Name: 11.1.0.6:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> >>>> DFS Remaining: 8807800832(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.31%
>> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> >>>>
>> >>>>
>> >>>> Name: 11.1.0.3:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> >>>> DFS Remaining: 8807641088(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.3%
>> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> >>>>
>> >>>>
>> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> >>>> ashettia@hortonworks.com> wrote:
>> >>>>
>> >>>>> Hi Redwane,
>> >>>>>
>> >>>>> Please run the following command as hdfs user on any datanode. The
>> >>>>> output will be something like this. Hope this helps
>> >>>>>
>> >>>>> hadoop dfsadmin -report
>> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> >>>>> DFS Used: 480129024 (457.89 MB)
>> >>>>> DFS Used%: 0.68%
>> >>>>> Under replicated blocks: 0
>> >>>>> Blocks with corrupt replicas: 0
>> >>>>> Missing blocks: 0
>> >>>>>
>> >>>>> Thanks
>> >>>>> -Abdelrahman
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
>> reduno1985@googlemail.com>wrote:
>> >>>>>
>> >>>>>>
>> >>>>>> I have my hosts running on openstack virtual machine instances each
>> >>>>>> instance has 10gb hard disc . Is there a way too see how much
>> space is in
>> >>>>>> the hdfs without web ui .
>> >>>>>>
>> >>>>>>
>> >>>>>> Sent from Samsung Mobile
>> >>>>>>
>> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> >>>>>> Check web ui how much space you have on hdfs???
>> >>>>>>
>> >>>>>> Sent from my iPhone
>> >>>>>>
>> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> >>>>>> ashettia@hortonworks.com> wrote:
>> >>>>>>
>> >>>>>> Hi Redwane ,
>> >>>>>>
>> >>>>>> It is possible that the hosts which are running tasks do not have
>> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> >>>>>> reduno1985@googlemail.com> wrote:
>> >>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> ---------- Forwarded message ----------
>> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> >>>>>>> Subject: About running a simple wordcount mapreduce
>> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Hi
>> >>>>>>> I am trying to run a wordcount MapReduce job on several files
>> >>>>>>> (<20 MB) using two machines. I get stuck at 0% map, 0% reduce.
>> >>>>>>> The jobtracker log file shows the following warning:
>> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> task.
>> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
>> map to
>> >>take
>> >>>>>>> 1317624576693539401
>> >>>>>>>
>> >>>>>>> Please help me ,
>> >>>>>>> Best Regards,
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >
>> >
>> > Matteo Lanati
>> > Distributed Resources Group
>> > Leibniz-Rechenzentrum (LRZ)
>> > Boltzmannstrasse 1
>> > 85748 Garching b. München (Germany)
>> > Phone: +49 89 35831 8724
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re:

Posted by Azuryy Yu <az...@gmail.com>.
This should be fixed in the hadoop-1.1.2 stable release.
If completedMapsInputSize turns out to be zero, then the job's map tasks must
also be zero, so the estimated output size is zero. Below is the code:

  long getEstimatedMapOutputSize() {
    long estimate = 0L;
    if (job.desiredMaps() > 0) {
      estimate = getEstimatedTotalMapOutputSize()  / job.desiredMaps();
    }
    return estimate;
  }
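
For anyone hitting the same log line: Harsh's divide-by-zero observation quoted
below maps onto plain Java semantics. Dividing a positive double by 0.0 yields
+Infinity, and Math.round(double) clamps +Infinity to Long.MAX_VALUE, which is
exactly the 9223372036854775807 in Matteo's "expect map to take" message. A
minimal, self-contained sketch (illustrative names only, not the actual
JobInProgress/ResourceEstimator code):

  public class EstimateOverflowDemo {
    public static void main(String[] args) {
      // Illustrative stand-ins: in the real estimator these come from
      // completed-map statistics, which can still be zero early in a job.
      double completedMapsInputSize = 0.0;   // nothing reported yet
      double completedMapsOutputSize = 1.0;  // any positive value
      double jobInputSize = 600 * 1024;      // roughly Matteo's 600 kB input

      // a positive double divided by 0.0 is +Infinity under IEEE 754
      double estimate = jobInputSize * completedMapsOutputSize / completedMapsInputSize;

      // Math.round(double) returns Long.MAX_VALUE for +Infinity
      System.out.println(Math.round(estimate));  // prints 9223372036854775807
    }
  }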



On Sat, Jun 1, 2013 at 11:49 PM, Harsh J <ha...@cloudera.com> wrote:

> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> or 8 exbibytes.
>
> Looking at the sources, this turns out to be a rather funny Java issue
> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> return in such a case). I've logged a bug report for this at
> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> reproducible case.
>
> Does this happen consistently for you?
>
> [1]
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>
> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> wrote:
> > Hi all,
> >
> > I stumbled upon this problem as well while trying to run the default
> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
> about 600 kB and the error is
> >
> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
> expect map to take 9223372036854775807
> >
> > The logfile is attached, together with the configuration files. The
> version I'm using is
> >
> > Hadoop 1.2.0
> > Subversion
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> 1479473
> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> > This command was run using
> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >
> > If I run the default configuration (i.e. no securty), then the job
> succeeds.
> >
> > Is there something missing in how I set up my nodes? How is it possible
> that the envisaged value for the needed space is so big?
> >
> > Thanks in advance.
> >
> > Matteo
> >
> >
> >
> >>Which version of Hadoop are you using. A quick search shows me a bug
> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >>similar symptoms. However, that was fixed a long while ago.
> >>
> >>
> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >>reduno1985@googlemail.com> wrote:
> >>
> >>> This the content of the jobtracker log file :
> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> Input
> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000000 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000001 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000002 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000003 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000004 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000005 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000006 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
> Job
> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
> >>> reduce tasks.
> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> Adding
> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >>> task_201303231139_0001_m_000008, for tracker
> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> Task
> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >>> task_201303231139_0001_m_000008 successfully.
> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>>
> >>>
> >>> The value in we excpect map to take is too huge   1317624576693539401
> >>> bytes  !!!!!!!
> >>>
> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >>> reduno1985@googlemail.com> wrote:
> >>>
> >>>> The estimated value that the hadoop compute is too huge for the simple
> >>>> example that i am running .
> >>>>
> >>>> ---------- Forwarded message ----------
> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >>>> Subject: Re: About running a simple wordcount mapreduce
> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >>>>
> >>>>
> >>>> This the output that I get I am running two machines  as you can see
>  do
> >>>> u see anything suspicious ?
> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >>>> Present Capacity: 17615499264 (16.41 GB)
> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >>>> DFS Used: 57344 (56 KB)
> >>>> DFS Used%: 0%
> >>>> Under replicated blocks: 0
> >>>> Blocks with corrupt replicas: 0
> >>>> Missing blocks: 0
> >>>>
> >>>> -------------------------------------------------
> >>>> Datanodes available: 2 (2 total, 0 dead)
> >>>>
> >>>> Name: 11.1.0.6:50010
> >>>> Decommission Status : Normal
> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >>>> DFS Used: 28672 (28 KB)
> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >>>> DFS Remaining: 8807800832(8.2 GB)
> >>>> DFS Used%: 0%
> >>>> DFS Remaining%: 83.31%
> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >>>>
> >>>>
> >>>> Name: 11.1.0.3:50010
> >>>> Decommission Status : Normal
> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >>>> DFS Used: 28672 (28 KB)
> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >>>> DFS Remaining: 8807641088(8.2 GB)
> >>>> DFS Used%: 0%
> >>>> DFS Remaining%: 83.3%
> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >>>>
> >>>>
> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >>>> ashettia@hortonworks.com> wrote:
> >>>>
> >>>>> Hi Redwane,
> >>>>>
> >>>>> Please run the following command as hdfs user on any datanode. The
> >>>>> output will be something like this. Hope this helps
> >>>>>
> >>>>> hadoop dfsadmin -report
> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >>>>> DFS Used: 480129024 (457.89 MB)
> >>>>> DFS Used%: 0.68%
> >>>>> Under replicated blocks: 0
> >>>>> Blocks with corrupt replicas: 0
> >>>>> Missing blocks: 0
> >>>>>
> >>>>> Thanks
> >>>>> -Abdelrahman
> >>>>>
> >>>>>
> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
> reduno1985@googlemail.com>wrote:
> >>>>>
> >>>>>>
> >>>>>> I have my hosts running on openstack virtual machine instances each
> >>>>>> instance has 10gb hard disc . Is there a way too see how much space
> is in
> >>>>>> the hdfs without web ui .
> >>>>>>
> >>>>>>
> >>>>>> Sent from Samsung Mobile
> >>>>>>
> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >>>>>> Check web ui how much space you have on hdfs???
> >>>>>>
> >>>>>> Sent from my iPhone
> >>>>>>
> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >>>>>> ashettia@hortonworks.com> wrote:
> >>>>>>
> >>>>>> Hi Redwane ,
> >>>>>>
> >>>>>> It is possible that the hosts which are running tasks are do not
> have
> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >>>>>> reduno1985@googlemail.com> wrote:
> >>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> ---------- Forwarded message ----------
> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >>>>>>> Subject: About running a simple wordcount mapreduce
> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >>>>>>>
> >>>>>>>
> >>>>>>> Hi
> >>>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >>>>>>> The jobtracker log file shows the following warning:
> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
> map to
> >>take
> >>>>>>> 1317624576693539401
> >>>>>>>
> >>>>>>> Please help me ,
> >>>>>>> Best Regards,
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>>
> >>>
> >
> >
> > Matteo Lanati
> > Distributed Resources Group
> > Leibniz-Rechenzentrum (LRZ)
> > Boltzmannstrasse 1
> > 85748 Garching b. München (Germany)
> > Phone: +49 89 35831 8724
>
>
>
> --
> Harsh J
>

Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Alex,

you gave me the right perspective ... pi works ;-). It's finally satisfying to see it at work.
The job finished without problems.
I'll try some other test programs such as grep, to check that there are no problems with input files.
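
In case it helps anyone searching the archive later, the invocations I mean are
along these lines (the examples jar name and the input/output paths depend on
the release and setup, so treat them as illustrative):

  hadoop jar hadoop-examples-1.2.0.jar pi 10 10
  hadoop jar hadoop-examples-1.2.0.jar wordcount input output-wordcount
  hadoop jar hadoop-examples-1.2.0.jar grep input output-grep 'dfs[a-z.]+'
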
Thanks,

Matteo


On Jun 4, 2013, at 5:43 PM, Alexander Alten-Lorenz <wg...@gmail.com> wrote:

> Hi Matteo,
> 
> Are you able to add more space to your test machines? Also, what does the pi example say (hadoop jar hadoop-examples pi 10 10)?
> 
> - Alex
> 
> On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <Ma...@lrz.de> wrote:
> 
>> Hi again,
>> 
>> unfortunately my problem is not solved.
>> I downloaded Hadoop v. 1.1.2a and made a basic configuration as suggested in [1].
>> No security, no ACLs, default scheduler ... The files are attached.
>> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
>> How can I increase the debug level to have a deeper look?
>> Thanks,
>> 
>> Matteo
>> 
>> 
>> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
>> On Jun 4, 2013, at 3:52 AM, Azuryy Yu <az...@gmail.com> wrote:
>> 
>>> Hi Harsh,
>>> 
>>> I need to take care my eyes recently, I mis-read 1.2.0 to 1.0.2, so I said upgrade. Sorry.
>>> 
>>> 
>>> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:
>>> Azuryy,
>>> 
>>> 1.1.2 < 1.2.0. Its not an upgrade you're suggesting there. If you feel
>>> there's been a regression, can you comment that on the JIRA?
>>> 
>>> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
>>>> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>>>> 
>>>> 
>>>> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
>>>>> 
>>>>> Hi Azuryy,
>>>>> 
>>>>> thanks for the update. Sorry for the silly question, but where can I
>>>>> download the patched version?
>>>>> If I look into the closest mirror (i.e.
>>>>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>>>>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>>>>> Thanks in advance,
>>>>> 
>>>>> Matteo
>>>>> 
>>>>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>>>>> any security, and the problem is there.
>>>>> 
>>>>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>>> 
>>>>>> can you upgrade to 1.1.2, which is also a stable release, and fixed the
>>>>>> bug you facing now.
>>>>>> 
>>>>>> --Send from my Sony mobile.
>>>>>> 
>>>>>> On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
>>>>>> Thanks Harsh for the reply. I was confused too that why security is
>>>>>> causing this.
>>>>>> 
>>>>>> Regards,
>>>>>> Shahab
>>>>>> 
>>>>>> 
>>>>>> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>>>>>> Shahab - I see he has mentioned generally that security is enabled
>>>>>> (but not that it happens iff security is enabled), and the issue here
>>>>>> doesn't have anything to do with security really.
>>>>>> 
>>>>>> Azurry - Lets discuss the code issues on the JIRA (instead of here) or
>>>>>> on the mapreduce-dev lists.
>>>>>> 
>>>>>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>>>>>> wrote:
>>>>>>> HI Harsh,
>>>>>>> 
>>>>>>> Quick question though: why do you think it only happens if the OP
>>>>>>> 'uses
>>>>>>> security' as he mentioned?
>>>>>>> 
>>>>>>> Regards,
>>>>>>> Shahab
>>>>>>> 
>>>>>>> 
>>>>>>> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>>>>>>>> 
>>>>>>>> Does smell like a bug as that number you get is simply
>>>>>>>> Long.MAX_VALUE,
>>>>>>>> or 8 exbibytes.
>>>>>>>> 
>>>>>>>> Looking at the sources, this turns out to be a rather funny Java
>>>>>>>> issue
>>>>>>>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>>>>>>>> return in such a case). I've logged a bug report for this at
>>>>>>>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>>>>>>>> reproducible case.
>>>>>>>> 
>>>>>>>> Does this happen consistently for you?
>>>>>>>> 
>>>>>>>> [1]
>>>>>>>> 
>>>>>>>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>>>>>>> 
>>>>>>>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>>>>>>>> wrote:
>>>>>>>>> Hi all,
>>>>>>>>> 
>>>>>>>>> I stumbled upon this problem as well while trying to run the
>>>>>>>>> default
>>>>>>>>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>>>>>>>>> virtual
>>>>>>>>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>>>>>>>>> node is
>>>>>>>>> used as JT+NN, the other as TT+DN. Security is enabled. The input
>>>>>>>>> file is
>>>>>>>>> about 600 kB and the error is
>>>>>>>>> 
>>>>>>>>> 2013-06-01 12:22:51,999 WARN
>>>>>>>>> org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>> room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>>>>>>>>> but we
>>>>>>>>> expect map to take 9223372036854775807
>>>>>>>>> 
>>>>>>>>> The logfile is attached, together with the configuration files. The
>>>>>>>>> version I'm using is
>>>>>>>>> 
>>>>>>>>> Hadoop 1.2.0
>>>>>>>>> Subversion
>>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
>>>>>>>>> -r
>>>>>>>>> 1479473
>>>>>>>>> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>>>>>>>>> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>>>>>>>>> This command was run using
>>>>>>>>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>>>>>>>>> 
>>>>>>>>> If I run the default configuration (i.e. no securty), then the job
>>>>>>>>> succeeds.
>>>>>>>>> 
>>>>>>>>> Is there something missing in how I set up my nodes? How is it
>>>>>>>>> possible
>>>>>>>>> that the envisaged value for the needed space is so big?
>>>>>>>>> 
>>>>>>>>> Thanks in advance.
>>>>>>>>> 
>>>>>>>>> Matteo
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> Which version of Hadoop are you using. A quick search shows me a
>>>>>>>>>> bug
>>>>>>>>>> https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
>>>>>>>>>> show
>>>>>>>>>> similar symptoms. However, that was fixed a long while ago.
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> This the content of the jobtracker log file :
>>>>>>>>>>> 2013-03-23 12:06:48,912 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Input
>>>>>>>>>>> size for job job_201303231139_0001 = 6950001. Number of splits =
>>>>>>>>>>> 7
>>>>>>>>>>> 2013-03-23 12:06:48,925 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000000 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,927 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000001 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,930 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000002 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,931 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000003 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,933 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000004 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,934 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000005 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,939 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000006 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,950 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>>>>>>>>>> 2013-03-23 12:06:48,978 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Job
>>>>>>>>>>> job_201303231139_0001 initialized successfully with 7 map tasks
>>>>>>>>>>> and 1
>>>>>>>>>>> reduce tasks.
>>>>>>>>>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>>>>>>>>>>> Adding
>>>>>>>>>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>>>>>>>>>> task_201303231139_0001_m_000008, for tracker
>>>>>>>>>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>>>>>>>>>> 2013-03-23 12:08:00,340 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Task
>>>>>>>>>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>>>>>>>>>> task_201303231139_0001_m_000008 successfully.
>>>>>>>>>>> 2013-03-23 12:08:00,538 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,543 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:01,264 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> The value in we excpect map to take is too huge
>>>>>>>>>>> 1317624576693539401
>>>>>>>>>>> bytes  !!!!!!!
>>>>>>>>>>> 
>>>>>>>>>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> The estimated value that the hadoop compute is too huge for the
>>>>>>>>>>>> simple
>>>>>>>>>>>> example that i am running .
>>>>>>>>>>>> 
>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>>> Date: Sat, Mar 23, 2013 at 11:32 AM
>>>>>>>>>>>> Subject: Re: About running a simple wordcount mapreduce
>>>>>>>>>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>>>>>>>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> This the output that I get I am running two machines  as you can
>>>>>>>>>>>> see
>>>>>>>>>>>> do
>>>>>>>>>>>> u see anything suspicious ?
>>>>>>>>>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>>>>>>>>>> Present Capacity: 17615499264 (16.41 GB)
>>>>>>>>>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>>>>>>>>>> DFS Used: 57344 (56 KB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>> 
>>>>>>>>>>>> -------------------------------------------------
>>>>>>>>>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>>>>>>>>> 
>>>>>>>>>>>> Name: 11.1.0.6:50010
>>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>>>>>>>>>> DFS Remaining: 8807800832(8.2 GB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> DFS Remaining%: 83.31%
>>>>>>>>>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> Name: 11.1.0.3:50010
>>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>>>>>>>>>> DFS Remaining: 8807641088(8.2 GB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> DFS Remaining%: 83.3%
>>>>>>>>>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Please run the following command as hdfs user on any datanode.
>>>>>>>>>>>>> The
>>>>>>>>>>>>> output will be something like this. Hope this helps
>>>>>>>>>>>>> 
>>>>>>>>>>>>> hadoop dfsadmin -report
>>>>>>>>>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>>>>>>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>>>>>>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>>>>>>>>>> DFS Used: 480129024 (457.89 MB)
>>>>>>>>>>>>> DFS Used%: 0.68%
>>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>> -Abdelrahman
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>>>>>>>>>>>>> <re...@googlemail.com>wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I have my hosts running on openstack virtual machine instances
>>>>>>>>>>>>>> each
>>>>>>>>>>>>>> instance has 10gb hard disc . Is there a way too see how much
>>>>>>>>>>>>>> space
>>>>>>>>>>>>>> is in
>>>>>>>>>>>>>> the hdfs without web ui .
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Sent from Samsung Mobile
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>>>>>>>>>> Check web ui how much space you have on hdfs???
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Redwane ,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> It is possible that the hosts which are running tasks are do
>>>>>>>>>>>>>> not
>>>>>>>>>>>>>> have
>>>>>>>>>>>>>> enough space. Those dirs are confiugred in mapred-site.xml
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>>>>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>>>>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Hi
>>>>>>>>>>>>>>> I am trying to run  a wordcount mapreduce job on several
>>>>>>>>>>>>>>> files
>>>>>>>>>>>>>>> (<20
>>>>>>>>>>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>>>>>>>>>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>>>>>>>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>>>>>>>>>>>>>>> task.
>>>>>>>>>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
>>>>>>>>>>>>>>> expect
>>>>>>>>>>>>>>> map to
>>>>>>>>>> take
>>>>>>>>>>>>>>> 1317624576693539401
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Please help me ,
>>>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Matteo Lanati
>>>>>>>>> Distributed Resources Group
>>>>>>>>> Leibniz-Rechenzentrum (LRZ)
>>>>>>>>> Boltzmannstrasse 1
>>>>>>>>> 85748 Garching b. München (Germany)
>>>>>>>>> Phone: +49 89 35831 8724
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Harsh J
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Harsh J
>>>>>> 
>>>>> 
>>>>> Matteo Lanati
>>>>> Distributed Resources Group
>>>>> Leibniz-Rechenzentrum (LRZ)
>>>>> Boltzmannstrasse 1
>>>>> 85748   Garching b. München     (Germany)
>>>>> Phone: +49 89 35831 8724
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Harsh J
>>> 
>> 
>> Matteo Lanati
>> Distributed Resources Group
>> Leibniz-Rechenzentrum (LRZ)
>> Boltzmannstrasse 1
>> 85748	Garching b. München	(Germany)
>> Phone: +49 89 35831 8724
>> <core-site.xml><hdfs-site.xml><mapred-site.xml>
> 

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724


Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Alex,

you gave me the right perspective ... pi works ;-) . It's finally satisfactory seeing it at work.
The job finished without problems.
I'll try some other test programs such as grep, to check that there are no problems with input files.
Thanks,

Matteo


On Jun 4, 2013, at 5:43 PM, Alexander Alten-Lorenz <wg...@gmail.com> wrote:

> Hi Matteo,
> 
> Are you able to add more space to your test machines? Also, what says the pi example (hadoop jar hadoop-examples pi 10 10 ?
> 
> - Alex
> 
> On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <Ma...@lrz.de> wrote:
> 
>> Hi again,
>> 
>> unfortunately my problem is not solved.
>> I downloaded Hadoop v. 1.1.2a and made a basic configuration as suggested in [1].
>> No security, no ACLs, default scheduler ... The files are attached.
>> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
>> How can I increase the debug level to have a deeper look?
>> Thanks,
>> 
>> Matteo
>> 
>> 
>> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
>> On Jun 4, 2013, at 3:52 AM, Azuryy Yu <az...@gmail.com> wrote:
>> 
>>> Hi Harsh,
>>> 
>>> I need to take care my eyes recently, I mis-read 1.2.0 to 1.0.2, so I said upgrade. Sorry.
>>> 
>>> 
>>> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:
>>> Azuryy,
>>> 
>>> 1.1.2 < 1.2.0. Its not an upgrade you're suggesting there. If you feel
>>> there's been a regression, can you comment that on the JIRA?
>>> 
>>> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
>>>> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>>>> 
>>>> 
>>>> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
>>>>> 
>>>>> Hi Azuryy,
>>>>> 
>>>>> thanks for the update. Sorry for the silly question, but where can I
>>>>> download the patched version?
>>>>> If I look into the closest mirror (i.e.
>>>>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>>>>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>>>>> Thanks in advance,
>>>>> 
>>>>> Matteo
>>>>> 
>>>>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>>>>> any security, and the problem is there.
>>>>> 
>>>>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>>> 
>>>>>> can you upgrade to 1.1.2, which is also a stable release, and fixed the
>>>>>> bug you facing now.
>>>>>> 
>>>>>> --Send from my Sony mobile.
>>>>>> 
>>>>>> On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
>>>>>> Thanks Harsh for the reply. I was confused too that why security is
>>>>>> causing this.
>>>>>> 
>>>>>> Regards,
>>>>>> Shahab
>>>>>> 
>>>>>> 
>>>>>> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>>>>>> Shahab - I see he has mentioned generally that security is enabled
>>>>>> (but not that it happens iff security is enabled), and the issue here
>>>>>> doesn't have anything to do with security really.
>>>>>> 
>>>>>> Azurry - Lets discuss the code issues on the JIRA (instead of here) or
>>>>>> on the mapreduce-dev lists.
>>>>>> 
>>>>>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>>>>>> wrote:
>>>>>>> HI Harsh,
>>>>>>> 
>>>>>>> Quick question though: why do you think it only happens if the OP
>>>>>>> 'uses
>>>>>>> security' as he mentioned?
>>>>>>> 
>>>>>>> Regards,
>>>>>>> Shahab
>>>>>>> 
>>>>>>> 
>>>>>>> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>>>>>>>> 
>>>>>>>> Does smell like a bug as that number you get is simply
>>>>>>>> Long.MAX_VALUE,
>>>>>>>> or 8 exbibytes.
>>>>>>>> 
>>>>>>>> Looking at the sources, this turns out to be a rather funny Java
>>>>>>>> issue
>>>>>>>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>>>>>>>> return in such a case). I've logged a bug report for this at
>>>>>>>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>>>>>>>> reproducible case.
>>>>>>>> 
>>>>>>>> Does this happen consistently for you?
>>>>>>>> 
>>>>>>>> [1]
>>>>>>>> 
>>>>>>>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>>>>>>> 
>>>>>>>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>>>>>>>> wrote:
>>>>>>>>> Hi all,
>>>>>>>>> 
>>>>>>>>> I stumbled upon this problem as well while trying to run the
>>>>>>>>> default
>>>>>>>>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>>>>>>>>> virtual
>>>>>>>>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>>>>>>>>> node is
>>>>>>>>> used as JT+NN, the other as TT+DN. Security is enabled. The input
>>>>>>>>> file is
>>>>>>>>> about 600 kB and the error is
>>>>>>>>> 
>>>>>>>>> 2013-06-01 12:22:51,999 WARN
>>>>>>>>> org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>> room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>>>>>>>>> but we
>>>>>>>>> expect map to take 9223372036854775807
>>>>>>>>> 
>>>>>>>>> The logfile is attached, together with the configuration files. The
>>>>>>>>> version I'm using is
>>>>>>>>> 
>>>>>>>>> Hadoop 1.2.0
>>>>>>>>> Subversion
>>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
>>>>>>>>> -r
>>>>>>>>> 1479473
>>>>>>>>> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>>>>>>>>> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>>>>>>>>> This command was run using
>>>>>>>>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>>>>>>>>> 
>>>>>>>>> If I run the default configuration (i.e. no securty), then the job
>>>>>>>>> succeeds.
>>>>>>>>> 
>>>>>>>>> Is there something missing in how I set up my nodes? How is it
>>>>>>>>> possible
>>>>>>>>> that the envisaged value for the needed space is so big?
>>>>>>>>> 
>>>>>>>>> Thanks in advance.
>>>>>>>>> 
>>>>>>>>> Matteo
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> Which version of Hadoop are you using. A quick search shows me a
>>>>>>>>>> bug
>>>>>>>>>> https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
>>>>>>>>>> show
>>>>>>>>>> similar symptoms. However, that was fixed a long while ago.
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> This the content of the jobtracker log file :
>>>>>>>>>>> 2013-03-23 12:06:48,912 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Input
>>>>>>>>>>> size for job job_201303231139_0001 = 6950001. Number of splits =
>>>>>>>>>>> 7
>>>>>>>>>>> 2013-03-23 12:06:48,925 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000000 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,927 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000001 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,930 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000002 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,931 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000003 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,933 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000004 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,934 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000005 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,939 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000006 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,950 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>>>>>>>>>> 2013-03-23 12:06:48,978 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Job
>>>>>>>>>>> job_201303231139_0001 initialized successfully with 7 map tasks
>>>>>>>>>>> and 1
>>>>>>>>>>> reduce tasks.
>>>>>>>>>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>>>>>>>>>>> Adding
>>>>>>>>>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>>>>>>>>>> task_201303231139_0001_m_000008, for tracker
>>>>>>>>>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>>>>>>>>>> 2013-03-23 12:08:00,340 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Task
>>>>>>>>>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>>>>>>>>>> task_201303231139_0001_m_000008 successfully.
>>>>>>>>>>> 2013-03-23 12:08:00,538 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,543 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:01,264 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> The value in we excpect map to take is too huge
>>>>>>>>>>> 1317624576693539401
>>>>>>>>>>> bytes  !!!!!!!
>>>>>>>>>>> 
>>>>>>>>>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> The estimated value that the hadoop compute is too huge for the
>>>>>>>>>>>> simple
>>>>>>>>>>>> example that i am running .
>>>>>>>>>>>> 
>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>>> Date: Sat, Mar 23, 2013 at 11:32 AM
>>>>>>>>>>>> Subject: Re: About running a simple wordcount mapreduce
>>>>>>>>>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>>>>>>>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> This the output that I get I am running two machines  as you can
>>>>>>>>>>>> see
>>>>>>>>>>>> do
>>>>>>>>>>>> u see anything suspicious ?
>>>>>>>>>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>>>>>>>>>> Present Capacity: 17615499264 (16.41 GB)
>>>>>>>>>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>>>>>>>>>> DFS Used: 57344 (56 KB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>> 
>>>>>>>>>>>> -------------------------------------------------
>>>>>>>>>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>>>>>>>>> 
>>>>>>>>>>>> Name: 11.1.0.6:50010
>>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>>>>>>>>>> DFS Remaining: 8807800832(8.2 GB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> DFS Remaining%: 83.31%
>>>>>>>>>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> Name: 11.1.0.3:50010
>>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>>>>>>>>>> DFS Remaining: 8807641088(8.2 GB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> DFS Remaining%: 83.3%
>>>>>>>>>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Please run the following command as hdfs user on any datanode.
>>>>>>>>>>>>> The
>>>>>>>>>>>>> output will be something like this. Hope this helps
>>>>>>>>>>>>> 
>>>>>>>>>>>>> hadoop dfsadmin -report
>>>>>>>>>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>>>>>>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>>>>>>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>>>>>>>>>> DFS Used: 480129024 (457.89 MB)
>>>>>>>>>>>>> DFS Used%: 0.68%
>>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>> -Abdelrahman
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>>>>>>>>>>>>> <re...@googlemail.com>wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I have my hosts running on OpenStack virtual machine instances;
>>>>>>>>>>>>>> each instance has a 10 GB hard disk. Is there a way to see how
>>>>>>>>>>>>>> much space is left in HDFS without the web UI?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Sent from Samsung Mobile
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>>>>>>>>>> Check the web UI to see how much space you have on HDFS.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Redwane ,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> It is possible that the hosts which are running tasks do not have
>>>>>>>>>>>>>> enough space. Those dirs are configured in mapred-site.xml.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>>>>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>>>>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Hi
>>>>>>>>>>>>>>> I am trying to run a wordcount MapReduce job on several files
>>>>>>>>>>>>>>> (<20 MB) using two machines. I get stuck on 0% map, 0% reduce.
>>>>>>>>>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>>>>>>>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>>>>>>>>>>>>>>> task.
>>>>>>>>>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
>>>>>>>>>>>>>>> expect
>>>>>>>>>>>>>>> map to
>>>>>>>>>>>>>>> take
>>>>>>>>>>>>>>> 1317624576693539401
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Please help me ,
>>>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Matteo Lanati
>>>>>>>>> Distributed Resources Group
>>>>>>>>> Leibniz-Rechenzentrum (LRZ)
>>>>>>>>> Boltzmannstrasse 1
>>>>>>>>> 85748 Garching b. München (Germany)
>>>>>>>>> Phone: +49 89 35831 8724
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Harsh J
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Harsh J
>>>>>> 
>>>>> 
>>>>> Matteo Lanati
>>>>> Distributed Resources Group
>>>>> Leibniz-Rechenzentrum (LRZ)
>>>>> Boltzmannstrasse 1
>>>>> 85748   Garching b. München     (Germany)
>>>>> Phone: +49 89 35831 8724
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Harsh J
>>> 
>> 
>> Matteo Lanati
>> Distributed Resources Group
>> Leibniz-Rechenzentrum (LRZ)
>> Boltzmannstrasse 1
>> 85748	Garching b. München	(Germany)
>> Phone: +49 89 35831 8724
>> <core-site.xml><hdfs-site.xml><mapred-site.xml>
> 

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724


Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi all,

I finally solved the problem. It was due to the cloud middleware I used to run the Hadoop VMs.
The domain type in the libvirt XML file was incorrectly set to 'qemu'. Once I fixed this and changed it to 'kvm', everything started to work properly.
Thanks for the support.

Matteo


On Jun 4, 2013, at 5:43 PM, Alexander Alten-Lorenz <wg...@gmail.com> wrote:

> Hi Matteo,
> 
> Are you able to add more space to your test machines? Also, what does the pi example say (hadoop jar hadoop-examples pi 10 10)?
> 
> - Alex
> 
> On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <Ma...@lrz.de> wrote:
> 
>> Hi again,
>> 
>> unfortunately my problem is not solved.
>> I downloaded Hadoop v. 1.1.2a and made a basic configuration as suggested in [1].
>> No security, no ACLs, default scheduler ... The files are attached.
>> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
>> How can I increase the debug level to have a deeper look?
>> Thanks,
>> 
>> Matteo
>> 
>> 
>> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
>> On Jun 4, 2013, at 3:52 AM, Azuryy Yu <az...@gmail.com> wrote:
>> 
>>> Hi Harsh,
>>> 
>>> I need to take care of my eyes; I misread 1.2.0 as 1.0.2, so I said upgrade. Sorry.
>>> 
>>> 
>>> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:
>>> Azuryy,
>>> 
>>> 1.1.2 < 1.2.0. It's not an upgrade you're suggesting there. If you feel
>>> there's been a regression, can you comment that on the JIRA?
>>> 
>>> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
>>>> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>>>> 
>>>> 
>>>> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
>>>>> 
>>>>> Hi Azuryy,
>>>>> 
>>>>> thanks for the update. Sorry for the silly question, but where can I
>>>>> download the patched version?
>>>>> If I look into the closest mirror (i.e.
>>>>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>>>>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>>>>> Thanks in advance,
>>>>> 
>>>>> Matteo
>>>>> 
>>>>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>>>>> any security, and the problem is there.
>>>>> 
>>>>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>>> 
>>>>>> Can you upgrade to 1.1.2, which is also a stable release and fixes the
>>>>>> bug you are facing now?
>>>>>> 
>>>>>> --Send from my Sony mobile.
>>>>>> 
>>>>>> On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
>>>>>> Thanks Harsh for the reply. I was confused too about why security would
>>>>>> be causing this.
>>>>>> 
>>>>>> Regards,
>>>>>> Shahab
>>>>>> 
>>>>>> 
>>>>>> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>>>>>> Shahab - I see he has mentioned generally that security is enabled
>>>>>> (but not that it happens iff security is enabled), and the issue here
>>>>>> doesn't have anything to do with security really.
>>>>>> 
>>>>>> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
>>>>>> on the mapreduce-dev lists.
>>>>>> 
>>>>>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>>>>>> wrote:
>>>>>>> Hi Harsh,
>>>>>>> 
>>>>>>> Quick question though: why do you think it only happens if the OP
>>>>>>> 'uses
>>>>>>> security' as he mentioned?
>>>>>>> 
>>>>>>> Regards,
>>>>>>> Shahab
>>>>>>> 
>>>>>>> 
>>>>>>> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>>>>>>>> 
>>>>>>>> Does smell like a bug as that number you get is simply
>>>>>>>> Long.MAX_VALUE,
>>>>>>>> or 8 exbibytes.
>>>>>>>> 
>>>>>>>> Looking at the sources, this turns out to be a rather funny Java
>>>>>>>> issue
>>>>>>>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>>>>>>>> return in such a case). I've logged a bug report for this at
>>>>>>>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>>>>>>>> reproducible case.
>>>>>>>> 
>>>>>>>> Does this happen consistently for you?
>>>>>>>> 
>>>>>>>> [1]
>>>>>>>> 
>>>>>>>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
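
A minimal, runnable Java sketch of the rounding behaviour referenced in [1]; the
input size is the one from the quoted JobTracker log, and which term actually
ends up as the zero denominator is an assumption here, not taken from the Hadoop
source:

public class RoundingSketch {
    public static void main(String[] args) {
        double inputBytes = 6950001.0;   // job input size reported by the JobTracker
        double denominator = 0.0;        // whichever estimate term turns out to be zero
        double estimate = inputBytes / denominator;   // floating-point divide by zero yields +Infinity
        // Math.round(double) clamps +Infinity to Long.MAX_VALUE, i.e. the
        // 9223372036854775807 bytes reported in the "No room for map task" warning.
        System.out.println(Math.round(estimate));
    }
}
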
>>>>>>>> 
>>>>>>>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>>>>>>>> wrote:
>>>>>>>>> Hi all,
>>>>>>>>> 
>>>>>>>>> I stumbled upon this problem as well while trying to run the
>>>>>>>>> default
>>>>>>>>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>>>>>>>>> virtual
>>>>>>>>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>>>>>>>>> node is
>>>>>>>>> used as JT+NN, the other as TT+DN. Security is enabled. The input
>>>>>>>>> file is
>>>>>>>>> about 600 kB and the error is
>>>>>>>>> 
>>>>>>>>> 2013-06-01 12:22:51,999 WARN
>>>>>>>>> org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>>> room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>>>>>>>>> but we
>>>>>>>>> expect map to take 9223372036854775807
>>>>>>>>> 
>>>>>>>>> The logfile is attached, together with the configuration files. The
>>>>>>>>> version I'm using is
>>>>>>>>> 
>>>>>>>>> Hadoop 1.2.0
>>>>>>>>> Subversion
>>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
>>>>>>>>> -r
>>>>>>>>> 1479473
>>>>>>>>> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>>>>>>>>> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>>>>>>>>> This command was run using
>>>>>>>>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>>>>>>>>> 
>>>>>>>>> If I run the default configuration (i.e. no security), then the job
>>>>>>>>> succeeds.
>>>>>>>>> 
>>>>>>>>> Is there something missing in how I set up my nodes? How is it
>>>>>>>>> possible
>>>>>>>>> that the envisaged value for the needed space is so big?
>>>>>>>>> 
>>>>>>>>> Thanks in advance.
>>>>>>>>> 
>>>>>>>>> Matteo
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> Which version of Hadoop are you using. A quick search shows me a
>>>>>>>>>> bug
>>>>>>>>>> https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
>>>>>>>>>> show
>>>>>>>>>> similar symptoms. However, that was fixed a long while ago.
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> This the content of the jobtracker log file :
>>>>>>>>>>> 2013-03-23 12:06:48,912 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Input
>>>>>>>>>>> size for job job_201303231139_0001 = 6950001. Number of splits =
>>>>>>>>>>> 7
>>>>>>>>>>> 2013-03-23 12:06:48,925 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000000 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,927 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000001 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,930 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000002 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,931 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000003 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,933 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000004 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,934 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000005 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,939 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> tip:task_201303231139_0001_m_000006 has split on
>>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>>> 2013-03-23 12:06:48,950 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>>>>>>>>>> 2013-03-23 12:06:48,978 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Job
>>>>>>>>>>> job_201303231139_0001 initialized successfully with 7 map tasks
>>>>>>>>>>> and 1
>>>>>>>>>>> reduce tasks.
>>>>>>>>>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>>>>>>>>>>> Adding
>>>>>>>>>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>>>>>>>>>> task_201303231139_0001_m_000008, for tracker
>>>>>>>>>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>>>>>>>>>> 2013-03-23 12:08:00,340 INFO
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> Task
>>>>>>>>>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>>>>>>>>>> task_201303231139_0001_m_000008 successfully.
>>>>>>>>>>> 2013-03-23 12:08:00,538 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,543 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 2013-03-23 12:08:01,264 WARN
>>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>>> No
>>>>>>>>>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>>>>>>>>>>> free;
>>>>>>>>>>> but we
>>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> The value in "we expect map to take" is far too large:
>>>>>>>>>>> 1317624576693539401 bytes!
>>>>>>>>>>> 
>>>>>>>>>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> The estimated value that Hadoop computes is far too large for the
>>>>>>>>>>>> simple example that I am running.
>>>>>>>>>>>> 
>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>>> Date: Sat, Mar 23, 2013 at 11:32 AM
>>>>>>>>>>>> Subject: Re: About running a simple wordcount mapreduce
>>>>>>>>>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>>>>>>>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> This is the output that I get. I am running two machines, as you
>>>>>>>>>>>> can see. Do you see anything suspicious?
>>>>>>>>>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>>>>>>>>>> Present Capacity: 17615499264 (16.41 GB)
>>>>>>>>>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>>>>>>>>>> DFS Used: 57344 (56 KB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>> 
>>>>>>>>>>>> -------------------------------------------------
>>>>>>>>>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>>>>>>>>> 
>>>>>>>>>>>> Name: 11.1.0.6:50010
>>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>>>>>>>>>> DFS Remaining: 8807800832(8.2 GB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> DFS Remaining%: 83.31%
>>>>>>>>>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> Name: 11.1.0.3:50010
>>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>>>>>>>>>> DFS Remaining: 8807641088(8.2 GB)
>>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>>> DFS Remaining%: 83.3%
>>>>>>>>>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Please run the following command as hdfs user on any datanode.
>>>>>>>>>>>>> The
>>>>>>>>>>>>> output will be something like this. Hope this helps
>>>>>>>>>>>>> 
>>>>>>>>>>>>> hadoop dfsadmin -report
>>>>>>>>>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>>>>>>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>>>>>>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>>>>>>>>>> DFS Used: 480129024 (457.89 MB)
>>>>>>>>>>>>> DFS Used%: 0.68%
>>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>> -Abdelrahman
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>>>>>>>>>>>>> <re...@googlemail.com>wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I have my hosts running on OpenStack virtual machine instances;
>>>>>>>>>>>>>> each instance has a 10 GB hard disk. Is there a way to see how
>>>>>>>>>>>>>> much space is left in HDFS without the web UI?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Sent from Samsung Mobile
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>>>>>>>>>> Check the web UI to see how much space you have on HDFS.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi Redwane ,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> It is possible that the hosts which are running tasks do not have
>>>>>>>>>>>>>> enough space. Those dirs are configured in mapred-site.xml.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>>>>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>>>>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Hi
>>>>>>>>>>>>>>> I am trying to run a wordcount MapReduce job on several files
>>>>>>>>>>>>>>> (<20 MB) using two machines. I get stuck on 0% map, 0% reduce.
>>>>>>>>>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>>>>>>>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>>>>>>>>>>>>>>> task.
>>>>>>>>>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
>>>>>>>>>>>>>>> expect
>>>>>>>>>>>>>>> map to
>>>>>>>>>>>>>>> take
>>>>>>>>>>>>>>> 1317624576693539401
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Please help me ,
>>>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Matteo Lanati
>>>>>>>>> Distributed Resources Group
>>>>>>>>> Leibniz-Rechenzentrum (LRZ)
>>>>>>>>> Boltzmannstrasse 1
>>>>>>>>> 85748 Garching b. München (Germany)
>>>>>>>>> Phone: +49 89 35831 8724
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> --
>>>>>>>> Harsh J
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Harsh J
>>>>>> 
>>>>> 
>>>>> Matteo Lanati
>>>>> Distributed Resources Group
>>>>> Leibniz-Rechenzentrum (LRZ)
>>>>> Boltzmannstrasse 1
>>>>> 85748   Garching b. München     (Germany)
>>>>> Phone: +49 89 35831 8724
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> --
>>> Harsh J
>>> 
>> 
>> Matteo Lanati
>> Distributed Resources Group
>> Leibniz-Rechenzentrum (LRZ)
>> Boltzmannstrasse 1
>> 85748	Garching b. München	(Germany)
>> Phone: +49 89 35831 8724
>> <core-site.xml><hdfs-site.xml><mapred-site.xml>
> 

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724


Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi all,

I finally solved the problem. It was due to the cloud middleware I used to run the Hadoop VMs.
The domain type in the libvirt xm file was incorrectly set to 'qemu'. Once I fixed this and changed to 'kvm' everything started to work properly.
Thanks for the support.

Matteo


On Jun 4, 2013, at 5:43 PM, Alexander Alten-Lorenz <wg...@gmail.com> wrote:

> Hi Matteo,
> 
> Are you able to add more space to your test machines? Also, what says the pi example (hadoop jar hadoop-examples pi 10 10 ?
> 
> - Alex
> 
> On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <Ma...@lrz.de> wrote:
> 
>> Hi again,
>> 
>> unfortunately my problem is not solved.
>> I downloaded Hadoop v. 1.1.2a and made a basic configuration as suggested in [1].
>> No security, no ACLs, default scheduler ... The files are attached.
>> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
>> How can I increase the debug level to have a deeper look?
>> Thanks,
>> 
>> Matteo
>> 
>> 
>> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724


Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Alex,

you gave me the right perspective ... the pi example works ;-). It's satisfying to finally see it working.
The job finished without problems.
I'll try some other test programs such as grep, to check that there are no problems with input files.
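For completeness, these are the smoke tests I have in mind, as shell commands (a sketch: the jar name assumes a stock Hadoop 1.2.0 tarball, and 'input'/'grep-out'/'wc-out' are HDFS paths of my choosing; the output directories must not exist yet):

  # pi needs no input data: 10 maps, 10 samples per map
  hadoop jar hadoop-examples-1.2.0.jar pi 10 10
  # grep example over an input directory already copied into HDFS
  hadoop jar hadoop-examples-1.2.0.jar grep input grep-out 'dfs[a-z.]+'
  # wordcount over the same input, writing to a separate output directory
  hadoop jar hadoop-examples-1.2.0.jar wordcount input wc-out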
Thanks,

Matteo


On Jun 4, 2013, at 5:43 PM, Alexander Alten-Lorenz <wg...@gmail.com> wrote:

> Hi Matteo,
> 
> Are you able to add more space to your test machines? Also, what says the pi example (hadoop jar hadoop-examples pi 10 10 ?
> 
> - Alex
> 
> On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <Ma...@lrz.de> wrote:
> 
>> Hi again,
>> 
>> unfortunately my problem is not solved.
>> I downloaded Hadoop v. 1.1.2a and made a basic configuration as suggested in [1].
>> No security, no ACLs, default scheduler ... The files are attached.
>> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
>> How can I increase the debug level to have a deeper look?
>> Thanks,
>> 
>> Matteo
>> 
>> 
>> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724


Re:

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi Matteo,

Are you able to add more space to your test machines? Also, what does the pi example say (hadoop jar hadoop-examples pi 10 10)?

- Alex

On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <Ma...@lrz.de> wrote:

> Hi again,
> 
> unfortunately my problem is not solved.
> I downloaded Hadoop v. 1.1.2a and made a basic configuration as suggested in [1].
> No security, no ACLs, default scheduler ... The files are attached.
> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
> How can I increase the debug level to have a deeper look?
> Thanks,
> 
> Matteo
> 
> 
> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability


Re:

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi Matteo,

Are you able to add more space to your test machines? Also, what does the pi example say (hadoop jar hadoop-examples pi 10 10)?

- Alex

On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <Ma...@lrz.de> wrote:

> Hi again,
> 
> unfortunately my problem is not solved.
> I downloaded Hadoop v. 1.1.2a and made a basic configuration as suggested in [1].
> No security, no ACLs, default scheduler ... The files are attached.
> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
> How can I increase the debug level to have a deeper look?
> Thanks,
> 
> Matteo
> 
> 
> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
> On Jun 4, 2013, at 3:52 AM, Azuryy Yu <az...@gmail.com> wrote:
> 
>> Hi Harsh,
>> 
>> I need to take care my eyes recently, I mis-read 1.2.0 to 1.0.2, so I said upgrade. Sorry.
>> 
>> 
>> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:
>> Azuryy,
>> 
>> 1.1.2 < 1.2.0. Its not an upgrade you're suggesting there. If you feel
>> there's been a regression, can you comment that on the JIRA?
>> 
>> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
>>> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>>> 
>>> 
>>> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
>>>> 
>>>> Hi Azuryy,
>>>> 
>>>> thanks for the update. Sorry for the silly question, but where can I
>>>> download the patched version?
>>>> If I look into the closest mirror (i.e.
>>>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>>>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>>>> Thanks in advance,
>>>> 
>>>> Matteo
>>>> 
>>>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>>>> any security, and the problem is there.
>>>> 
>>>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>> 
>>>>> can you upgrade to 1.1.2, which is also a stable release, and fixed the
>>>>> bug you facing now.
>>>>> 
>>>>> --Send from my Sony mobile.
>>>>> 
>>>>> On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
>>>>> Thanks Harsh for the reply. I was confused too that why security is
>>>>> causing this.
>>>>> 
>>>>> Regards,
>>>>> Shahab
>>>>> 
>>>>> 
>>>>> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>>>>> Shahab - I see he has mentioned generally that security is enabled
>>>>> (but not that it happens iff security is enabled), and the issue here
>>>>> doesn't have anything to do with security really.
>>>>> 
>>>>> Azurry - Lets discuss the code issues on the JIRA (instead of here) or
>>>>> on the mapreduce-dev lists.
>>>>> 
>>>>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>>>>> wrote:
>>>>>> HI Harsh,
>>>>>> 
>>>>>> Quick question though: why do you think it only happens if the OP
>>>>>> 'uses
>>>>>> security' as he mentioned?
>>>>>> 
>>>>>> Regards,
>>>>>> Shahab
>>>>>> 
>>>>>> 
>>>>>> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>>>>>>> 
>>>>>>> Does smell like a bug as that number you get is simply
>>>>>>> Long.MAX_VALUE,
>>>>>>> or 8 exbibytes.
>>>>>>> 
>>>>>>> Looking at the sources, this turns out to be a rather funny Java
>>>>>>> issue
>>>>>>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>>>>>>> return in such a case). I've logged a bug report for this at
>>>>>>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>>>>>>> reproducible case.
>>>>>>> 
>>>>>>> Does this happen consistently for you?
>>>>>>> 
>>>>>>> [1]
>>>>>>> 
>>>>>>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>>>>>> 
>>>>>>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>>>>>>> wrote:
>>>>>>>> Hi all,
>>>>>>>> 
>>>>>>>> I stumbled upon this problem as well while trying to run the
>>>>>>>> default
>>>>>>>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>>>>>>>> virtual
>>>>>>>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>>>>>>>> node is
>>>>>>>> used as JT+NN, the other as TT+DN. Security is enabled. The input
>>>>>>>> file is
>>>>>>>> about 600 kB and the error is
>>>>>>>> 
>>>>>>>> 2013-06-01 12:22:51,999 WARN
>>>>>>>> org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>> room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>>>>>>>> but we
>>>>>>>> expect map to take 9223372036854775807
>>>>>>>> 
>>>>>>>> The logfile is attached, together with the configuration files. The
>>>>>>>> version I'm using is
>>>>>>>> 
>>>>>>>> Hadoop 1.2.0
>>>>>>>> Subversion
>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
>>>>>>>> -r
>>>>>>>> 1479473
>>>>>>>> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>>>>>>>> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>>>>>>>> This command was run using
>>>>>>>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>>>>>>>> 
>>>>>>>> If I run the default configuration (i.e. no securty), then the job
>>>>>>>> succeeds.
>>>>>>>> 
>>>>>>>> Is there something missing in how I set up my nodes? How is it
>>>>>>>> possible
>>>>>>>> that the envisaged value for the needed space is so big?
>>>>>>>> 
>>>>>>>> Thanks in advance.
>>>>>>>> 
>>>>>>>> Matteo
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> Which version of Hadoop are you using. A quick search shows me a
>>>>>>>>> bug
>>>>>>>>> https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
>>>>>>>>> show
>>>>>>>>> similar symptoms. However, that was fixed a long while ago.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>> 
>>>>>>>>>> This the content of the jobtracker log file :
>>>>>>>>>> 2013-03-23 12:06:48,912 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> Input
>>>>>>>>>> size for job job_201303231139_0001 = 6950001. Number of splits =
>>>>>>>>>> 7
>>>>>>>>>> 2013-03-23 12:06:48,925 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000000 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,927 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000001 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,930 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000002 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,931 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000003 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,933 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000004 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,934 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000005 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,939 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000006 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,950 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>>>>>>>>> 2013-03-23 12:06:48,978 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> Job
>>>>>>>>>> job_201303231139_0001 initialized successfully with 7 map tasks
>>>>>>>>>> and 1
>>>>>>>>>> reduce tasks.
>>>>>>>>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>>>>>>>>>> Adding
>>>>>>>>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>>>>>>>>> task_201303231139_0001_m_000008, for tracker
>>>>>>>>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>>>>>>>>> 2013-03-23 12:08:00,340 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> Task
>>>>>>>>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>>>>>>>>> task_201303231139_0001_m_000008 successfully.
>>>>>>>>>> 2013-03-23 12:08:00,538 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,543 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:01,264 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> The value in "we expect map to take" is way too huge:
>>>>>>>>>> 1317624576693539401 bytes!
>>>>>>>>>> 
>>>>>>>>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> The estimated value that Hadoop computes is way too huge for the
>>>>>>>>>>> simple example that I am running.
>>>>>>>>>>> 
>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>> Date: Sat, Mar 23, 2013 at 11:32 AM
>>>>>>>>>>> Subject: Re: About running a simple wordcount mapreduce
>>>>>>>>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>>>>>>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> This is the output that I get. I am running two machines, as you
>>>>>>>>>>> can see. Do you see anything suspicious?
>>>>>>>>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>>>>>>>>> Present Capacity: 17615499264 (16.41 GB)
>>>>>>>>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>>>>>>>>> DFS Used: 57344 (56 KB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>> 
>>>>>>>>>>> -------------------------------------------------
>>>>>>>>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>>>>>>>> 
>>>>>>>>>>> Name: 11.1.0.6:50010
>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>>>>>>>>> DFS Remaining: 8807800832(8.2 GB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> DFS Remaining%: 83.31%
>>>>>>>>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> Name: 11.1.0.3:50010
>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>>>>>>>>> DFS Remaining: 8807641088(8.2 GB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> DFS Remaining%: 83.3%
>>>>>>>>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>> 
>>>>>>>>>>>> Please run the following command as hdfs user on any datanode.
>>>>>>>>>>>> The
>>>>>>>>>>>> output will be something like this. Hope this helps
>>>>>>>>>>>> 
>>>>>>>>>>>> hadoop dfsadmin -report
>>>>>>>>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>>>>>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>>>>>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>>>>>>>>> DFS Used: 480129024 (457.89 MB)
>>>>>>>>>>>> DFS Used%: 0.68%
>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>> 
>>>>>>>>>>>> Thanks
>>>>>>>>>>>> -Abdelrahman
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>>>>>>>>>>>> <re...@googlemail.com>wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I have my hosts running on OpenStack virtual machine instances;
>>>>>>>>>>>>> each instance has a 10 GB hard disk. Is there a way to see how
>>>>>>>>>>>>> much space is left in HDFS without the web UI?
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Sent from Samsung Mobile
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>>>>>>>>> Check the web UI for how much space you have on HDFS.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Redwane ,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> It is possible that the hosts which are running tasks do not
>>>>>>>>>>>>> have enough space. Those dirs are configured in mapred-site.xml.
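
(For reference: in Hadoop 1.x the task scratch space comes from the directories listed under mapred.local.dir; a minimal mapred-site.xml entry would look roughly like the sketch below, where the paths are only placeholders for whatever disks the TaskTrackers actually have.)

<property>
  <name>mapred.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local</value>
</property>
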
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>>>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>>>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi
>>>>>>>>>>>>>> I am trying to run  a wordcount mapreduce job on several
>>>>>>>>>>>>>> files
>>>>>>>>>>>>>> (<20
>>>>>>>>>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>>>>>>>>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>>>>>>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>>>>>>>>>>>>>> task.
>>>>>>>>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
>>>>>>>>>>>>>> expect
>>>>>>>>>>>>>> map to
>>>>>>>>> take
>>>>>>>>>>>>>> 1317624576693539401
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Please help me ,
>>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Matteo Lanati
>>>>>>>> Distributed Resources Group
>>>>>>>> Leibniz-Rechenzentrum (LRZ)
>>>>>>>> Boltzmannstrasse 1
>>>>>>>> 85748 Garching b. München (Germany)
>>>>>>>> Phone: +49 89 35831 8724
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> Harsh J
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Harsh J
>>>>> 
>>>> 
>>>> Matteo Lanati
>>>> Distributed Resources Group
>>>> Leibniz-Rechenzentrum (LRZ)
>>>> Boltzmannstrasse 1
>>>> 85748   Garching b. München     (Germany)
>>>> Phone: +49 89 35831 8724
>>>> 
>>> 
>> 
>> 
>> 
>> --
>> Harsh J
>> 
> 
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748	Garching b. München	(Germany)
> Phone: +49 89 35831 8724
> <core-site.xml><hdfs-site.xml><mapred-site.xml>


Re:

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi Matteo,

Are you able to add more space to your test machines? Also, what does the pi example report (hadoop jar hadoop-examples pi 10 10)?

- Alex
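
For reference, with the stock 1.2.0 tarball layout the full command would be something like the lines below (the examples jar name is an assumption based on the default distribution; the install path is the one from Matteo's version output):

cd /home/lu95jib/hadoop-exmpl/hadoop-1.2.0
bin/hadoop jar hadoop-examples-1.2.0.jar pi 10 10
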

On Jun 4, 2013, at 4:34 PM, "Lanati, Matteo" <Ma...@lrz.de> wrote:

> Hi again,
> 
> unfortunately my problem is not solved.
> I downloaded Hadoop v. 1.1.2 and made a basic configuration as suggested in [1].
> No security, no ACLs, default scheduler ... The files are attached.
> I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
> How can I increase the debug level to have a deeper look?
> Thanks,
> 
> Matteo
> 
> 
> [1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
> On Jun 4, 2013, at 3:52 AM, Azuryy Yu <az...@gmail.com> wrote:
> 
>> Hi Harsh,
>> 
>> I need to take care my eyes recently, I mis-read 1.2.0 to 1.0.2, so I said upgrade. Sorry.
>> 
>> 
>> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:
>> Azuryy,
>> 
>> 1.1.2 < 1.2.0. Its not an upgrade you're suggesting there. If you feel
>> there's been a regression, can you comment that on the JIRA?
>> 
>> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
>>> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>>> 
>>> 
>>> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
>>>> 
>>>> Hi Azuryy,
>>>> 
>>>> thanks for the update. Sorry for the silly question, but where can I
>>>> download the patched version?
>>>> If I look into the closest mirror (i.e.
>>>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>>>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>>>> Thanks in advance,
>>>> 
>>>> Matteo
>>>> 
>>>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>>>> any security, and the problem is there.
>>>> 
>>>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>>>> 
>>>>> can you upgrade to 1.1.2, which is also a stable release, and fixed the
>>>>> bug you facing now.
>>>>> 
>>>>> --Send from my Sony mobile.
>>>>> 
>>>>> On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
>>>>> Thanks Harsh for the reply. I was confused too that why security is
>>>>> causing this.
>>>>> 
>>>>> Regards,
>>>>> Shahab
>>>>> 
>>>>> 
>>>>> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>>>>> Shahab - I see he has mentioned generally that security is enabled
>>>>> (but not that it happens iff security is enabled), and the issue here
>>>>> doesn't have anything to do with security really.
>>>>> 
>>>>> Azurry - Lets discuss the code issues on the JIRA (instead of here) or
>>>>> on the mapreduce-dev lists.
>>>>> 
>>>>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>>>>> wrote:
>>>>>> HI Harsh,
>>>>>> 
>>>>>> Quick question though: why do you think it only happens if the OP
>>>>>> 'uses
>>>>>> security' as he mentioned?
>>>>>> 
>>>>>> Regards,
>>>>>> Shahab
>>>>>> 
>>>>>> 
>>>>>> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>>>>>>> 
>>>>>>> Does smell like a bug as that number you get is simply
>>>>>>> Long.MAX_VALUE,
>>>>>>> or 8 exbibytes.
>>>>>>> 
>>>>>>> Looking at the sources, this turns out to be a rather funny Java
>>>>>>> issue
>>>>>>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>>>>>>> return in such a case). I've logged a bug report for this at
>>>>>>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>>>>>>> reproducible case.
>>>>>>> 
>>>>>>> Does this happen consistently for you?
>>>>>>> 
>>>>>>> [1]
>>>>>>> 
>>>>>>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
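
A standalone sketch of the failure mode described above (the variable names are hypothetical and this is not the actual JobInProgress/ResourceEstimator code):

public class RoundOverflowDemo {
    public static void main(String[] args) {
        // Hypothetical byte counts, only to reproduce the arithmetic.
        double completedMapsInputSize = 0.0;      // nothing accounted for yet
        double completedMapsOutputSize = 57344.0; // some positive byte count

        // Dividing a positive double by 0.0 yields +Infinity, not an exception.
        double estimate = completedMapsOutputSize / completedMapsInputSize;

        // Math.round(double) returns Long.MAX_VALUE for +Infinity, which is the
        // 9223372036854775807 seen in the "No room for map task" warning.
        System.out.println(Math.round(estimate)); // prints 9223372036854775807
    }
}
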
>>>>>>> 
>>>>>>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>>>>>>> wrote:
>>>>>>>> Hi all,
>>>>>>>> 
>>>>>>>> I stumbled upon this problem as well while trying to run the
>>>>>>>> default
>>>>>>>> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>>>>>>>> virtual
>>>>>>>> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>>>>>>>> node is
>>>>>>>> used as JT+NN, the other as TT+DN. Security is enabled. The input
>>>>>>>> file is
>>>>>>>> about 600 kB and the error is
>>>>>>>> 
>>>>>>>> 2013-06-01 12:22:51,999 WARN
>>>>>>>> org.apache.hadoop.mapred.JobInProgress: No
>>>>>>>> room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>>>>>>>> but we
>>>>>>>> expect map to take 9223372036854775807
>>>>>>>> 
>>>>>>>> The logfile is attached, together with the configuration files. The
>>>>>>>> version I'm using is
>>>>>>>> 
>>>>>>>> Hadoop 1.2.0
>>>>>>>> Subversion
>>>>>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
>>>>>>>> -r
>>>>>>>> 1479473
>>>>>>>> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>>>>>>>> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>>>>>>>> This command was run using
>>>>>>>> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>>>>>>>> 
>>>>>>>> If I run the default configuration (i.e. no securty), then the job
>>>>>>>> succeeds.
>>>>>>>> 
>>>>>>>> Is there something missing in how I set up my nodes? How is it
>>>>>>>> possible
>>>>>>>> that the envisaged value for the needed space is so big?
>>>>>>>> 
>>>>>>>> Thanks in advance.
>>>>>>>> 
>>>>>>>> Matteo
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> Which version of Hadoop are you using. A quick search shows me a
>>>>>>>>> bug
>>>>>>>>> https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
>>>>>>>>> show
>>>>>>>>> similar symptoms. However, that was fixed a long while ago.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>> 
>>>>>>>>>> This the content of the jobtracker log file :
>>>>>>>>>> 2013-03-23 12:06:48,912 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> Input
>>>>>>>>>> size for job job_201303231139_0001 = 6950001. Number of splits =
>>>>>>>>>> 7
>>>>>>>>>> 2013-03-23 12:06:48,925 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000000 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,927 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000001 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,930 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000002 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,931 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000003 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,933 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000004 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,934 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000005 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,939 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> tip:task_201303231139_0001_m_000006 has split on
>>>>>>>>>> node:/default-rack/hadoop0.novalocal
>>>>>>>>>> 2013-03-23 12:06:48,950 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>>>>>>>>> 2013-03-23 12:06:48,978 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> Job
>>>>>>>>>> job_201303231139_0001 initialized successfully with 7 map tasks
>>>>>>>>>> and 1
>>>>>>>>>> reduce tasks.
>>>>>>>>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>>>>>>>>>> Adding
>>>>>>>>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>>>>>>>>> task_201303231139_0001_m_000008, for tracker
>>>>>>>>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>>>>>>>>> 2013-03-23 12:08:00,340 INFO
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> Task
>>>>>>>>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>>>>>>>>> task_201303231139_0001_m_000008 successfully.
>>>>>>>>>> 2013-03-23 12:08:00,538 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,543 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:00,544 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 2013-03-23 12:08:01,264 WARN
>>>>>>>>>> org.apache.hadoop.mapred.JobInProgress:
>>>>>>>>>> No
>>>>>>>>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>>>>>>>>>> free;
>>>>>>>>>> but we
>>>>>>>>>> expect map to take 1317624576693539401
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> The value in "we expect map to take" is way too huge:
>>>>>>>>>> 1317624576693539401 bytes!
>>>>>>>>>> 
>>>>>>>>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>>> The estimated value that the hadoop compute is too huge for the
>>>>>>>>>>> simple
>>>>>>>>>>> example that i am running .
>>>>>>>>>>> 
>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>> Date: Sat, Mar 23, 2013 at 11:32 AM
>>>>>>>>>>> Subject: Re: About running a simple wordcount mapreduce
>>>>>>>>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>>>>>>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> This the output that I get I am running two machines  as you can
>>>>>>>>>>> see
>>>>>>>>>>> do
>>>>>>>>>>> u see anything suspicious ?
>>>>>>>>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>>>>>>>>> Present Capacity: 17615499264 (16.41 GB)
>>>>>>>>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>>>>>>>>> DFS Used: 57344 (56 KB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>> 
>>>>>>>>>>> -------------------------------------------------
>>>>>>>>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>>>>>>>> 
>>>>>>>>>>> Name: 11.1.0.6:50010
>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>>>>>>>>> DFS Remaining: 8807800832(8.2 GB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> DFS Remaining%: 83.31%
>>>>>>>>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> Name: 11.1.0.3:50010
>>>>>>>>>>> Decommission Status : Normal
>>>>>>>>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>>>>>>>>> DFS Used: 28672 (28 KB)
>>>>>>>>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>>>>>>>>> DFS Remaining: 8807641088(8.2 GB)
>>>>>>>>>>> DFS Used%: 0%
>>>>>>>>>>> DFS Remaining%: 83.3%
>>>>>>>>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Hi Redwane,
>>>>>>>>>>>> 
>>>>>>>>>>>> Please run the following command as hdfs user on any datanode.
>>>>>>>>>>>> The
>>>>>>>>>>>> output will be something like this. Hope this helps
>>>>>>>>>>>> 
>>>>>>>>>>>> hadoop dfsadmin -report
>>>>>>>>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>>>>>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>>>>>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>>>>>>>>> DFS Used: 480129024 (457.89 MB)
>>>>>>>>>>>> DFS Used%: 0.68%
>>>>>>>>>>>> Under replicated blocks: 0
>>>>>>>>>>>> Blocks with corrupt replicas: 0
>>>>>>>>>>>> Missing blocks: 0
>>>>>>>>>>>> 
>>>>>>>>>>>> Thanks
>>>>>>>>>>>> -Abdelrahman
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>>>>>>>>>>>> <re...@googlemail.com>wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I have my hosts running on openstack virtual machine instances
>>>>>>>>>>>>> each
>>>>>>>>>>>>> instance has 10gb hard disc . Is there a way too see how much
>>>>>>>>>>>>> space
>>>>>>>>>>>>> is in
>>>>>>>>>>>>> the hdfs without web ui .
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Sent from Samsung Mobile
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>>>>>>>>> Check web ui how much space you have on hdfs???
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Sent from my iPhone
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>>>>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi Redwane ,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> It is possible that the hosts which are running tasks do not
>>>>>>>>>>>>> have enough space. Those dirs are configured in mapred-site.xml.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>>>>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> ---------- Forwarded message ----------
>>>>>>>>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>>>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>>>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>>>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi
>>>>>>>>>>>>>> I am trying to run  a wordcount mapreduce job on several
>>>>>>>>>>>>>> files
>>>>>>>>>>>>>> (<20
>>>>>>>>>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>>>>>>>>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>>>>>>>> WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>>>>>>>>>>>>>> task.
>>>>>>>>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
>>>>>>>>>>>>>> expect
>>>>>>>>>>>>>> map to
>>>>>>>>> take
>>>>>>>>>>>>>> 1317624576693539401
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Please help me ,
>>>>>>>>>>>>>> Best Regards,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Matteo Lanati
>>>>>>>> Distributed Resources Group
>>>>>>>> Leibniz-Rechenzentrum (LRZ)
>>>>>>>> Boltzmannstrasse 1
>>>>>>>> 85748 Garching b. München (Germany)
>>>>>>>> Phone: +49 89 35831 8724
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> Harsh J
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Harsh J
>>>>> 
>>>> 
>>>> Matteo Lanati
>>>> Distributed Resources Group
>>>> Leibniz-Rechenzentrum (LRZ)
>>>> Boltzmannstrasse 1
>>>> 85748   Garching b. München     (Germany)
>>>> Phone: +49 89 35831 8724
>>>> 
>>> 
>> 
>> 
>> 
>> --
>> Harsh J
>> 
> 
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748	Garching b. München	(Germany)
> Phone: +49 89 35831 8724
> <core-site.xml><hdfs-site.xml><mapred-site.xml>


Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi again,

unfortunately my problem is not solved.
I downloaded Hadoop v. 1.1.2 and made a basic configuration as suggested in [1].
No security, no ACLs, default scheduler ... The files are attached.
I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
How can I increase the debug level to have a deeper look?
Thanks,

Matteo


[1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
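
As for raising the debug level, two sketches assuming a stock Hadoop 1.x layout (the class and host names are only examples): add a logger line to conf/log4j.properties on the JobTracker and restart it, or flip the level at runtime through the daemon's HTTP port:

log4j.logger.org.apache.hadoop.mapred.JobInProgress=DEBUG
bin/hadoop daemonlog -setlevel <jobtracker-host>:50030 org.apache.hadoop.mapred.JobInProgress DEBUG
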
On Jun 4, 2013, at 3:52 AM, Azuryy Yu <az...@gmail.com> wrote:

> Hi Harsh,
> 
> I need to take care my eyes recently, I mis-read 1.2.0 to 1.0.2, so I said upgrade. Sorry.
> 
> 
> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:
> Azuryy,
> 
> 1.1.2 < 1.2.0. Its not an upgrade you're suggesting there. If you feel
> there's been a regression, can you comment that on the JIRA?
> 
> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
> > yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
> >
> >
> > On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
> >>
> >> Hi Azuryy,
> >>
> >> thanks for the update. Sorry for the silly question, but where can I
> >> download the patched version?
> >> If I look into the closest mirror (i.e.
> >> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
> >> Hadoop 1.1.2 version was last updated on Jan. 31st.
> >> Thanks in advance,
> >>
> >> Matteo
> >>
> >> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
> >> any security, and the problem is there.
> >>
> >> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
> >>
> >> > can you upgrade to 1.1.2, which is also a stable release, and fixed the
> >> > bug you facing now.
> >> >
> >> > --Send from my Sony mobile.
> >> >
> >> > On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
> >> > Thanks Harsh for the reply. I was confused too that why security is
> >> > causing this.
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
> >> > Shahab - I see he has mentioned generally that security is enabled
> >> > (but not that it happens iff security is enabled), and the issue here
> >> > doesn't have anything to do with security really.
> >> >
> >> > Azurry - Lets discuss the code issues on the JIRA (instead of here) or
> >> > on the mapreduce-dev lists.
> >> >
> >> > On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
> >> > wrote:
> >> > > HI Harsh,
> >> > >
> >> > > Quick question though: why do you think it only happens if the OP
> >> > > 'uses
> >> > > security' as he mentioned?
> >> > >
> >> > > Regards,
> >> > > Shahab
> >> > >
> >> > >
> >> > > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
> >> > >>
> >> > >> Does smell like a bug as that number you get is simply
> >> > >> Long.MAX_VALUE,
> >> > >> or 8 exbibytes.
> >> > >>
> >> > >> Looking at the sources, this turns out to be a rather funny Java
> >> > >> issue
> >> > >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> >> > >> return in such a case). I've logged a bug report for this at
> >> > >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> >> > >> reproducible case.
> >> > >>
> >> > >> Does this happen consistently for you?
> >> > >>
> >> > >> [1]
> >> > >>
> >> > >> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
> >> > >>
> >> > >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> >> > >> wrote:
> >> > >> > Hi all,
> >> > >> >
> >> > >> > I stumbled upon this problem as well while trying to run the
> >> > >> > default
> >> > >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
> >> > >> > virtual
> >> > >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
> >> > >> > node is
> >> > >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
> >> > >> > file is
> >> > >> > about 600 kB and the error is
> >> > >> >
> >> > >> > 2013-06-01 12:22:51,999 WARN
> >> > >> > org.apache.hadoop.mapred.JobInProgress: No
> >> > >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
> >> > >> > but we
> >> > >> > expect map to take 9223372036854775807
> >> > >> >
> >> > >> > The logfile is attached, together with the configuration files. The
> >> > >> > version I'm using is
> >> > >> >
> >> > >> > Hadoop 1.2.0
> >> > >> > Subversion
> >> > >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
> >> > >> > -r
> >> > >> > 1479473
> >> > >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> >> > >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> >> > >> > This command was run using
> >> > >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >> > >> >
> >> > >> > If I run the default configuration (i.e. no securty), then the job
> >> > >> > succeeds.
> >> > >> >
> >> > >> > Is there something missing in how I set up my nodes? How is it
> >> > >> > possible
> >> > >> > that the envisaged value for the needed space is so big?
> >> > >> >
> >> > >> > Thanks in advance.
> >> > >> >
> >> > >> > Matteo
> >> > >> >
> >> > >> >
> >> > >> >
> >> > >> >>Which version of Hadoop are you using. A quick search shows me a
> >> > >> >> bug
> >> > >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
> >> > >> >> show
> >> > >> >>similar symptoms. However, that was fixed a long while ago.
> >> > >> >>
> >> > >> >>
> >> > >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >> > >> >>reduno1985@googlemail.com> wrote:
> >> > >> >>
> >> > >> >>> This the content of the jobtracker log file :
> >> > >> >>> 2013-03-23 12:06:48,912 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Input
> >> > >> >>> size for job job_201303231139_0001 = 6950001. Number of splits =
> >> > >> >>> 7
> >> > >> >>> 2013-03-23 12:06:48,925 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000000 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,927 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000001 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,930 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000002 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,931 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000003 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,933 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000004 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,934 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000005 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,939 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000006 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,950 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> > >> >>> 2013-03-23 12:06:48,978 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Job
> >> > >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
> >> > >> >>> and 1
> >> > >> >>> reduce tasks.
> >> > >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> >> > >> >>> Adding
> >> > >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> > >> >>> task_201303231139_0001_m_000008, for tracker
> >> > >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> > >> >>> 2013-03-23 12:08:00,340 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Task
> >> > >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >> > >> >>> task_201303231139_0001_m_000008 successfully.
> >> > >> >>> 2013-03-23 12:08:00,538 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,543 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,544 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,544 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:01,264 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>>
> >> > >> >>>
> >> > >> >>> The value in "we expect map to take" is way too huge:
> >> > >> >>> 1317624576693539401 bytes!
> >> > >> >>>
> >> > >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> > >> >>> reduno1985@googlemail.com> wrote:
> >> > >> >>>
> >> > >> >>>> The estimated value that the hadoop compute is too huge for the
> >> > >> >>>> simple
> >> > >> >>>> example that i am running .
> >> > >> >>>>
> >> > >> >>>> ---------- Forwarded message ----------
> >> > >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> > >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >> > >> >>>> Subject: Re: About running a simple wordcount mapreduce
> >> > >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >> > >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> This the output that I get I am running two machines  as you can
> >> > >> >>>> see
> >> > >> >>>> do
> >> > >> >>>> u see anything suspicious ?
> >> > >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >> > >> >>>> Present Capacity: 17615499264 (16.41 GB)
> >> > >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >> > >> >>>> DFS Used: 57344 (56 KB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> Under replicated blocks: 0
> >> > >> >>>> Blocks with corrupt replicas: 0
> >> > >> >>>> Missing blocks: 0
> >> > >> >>>>
> >> > >> >>>> -------------------------------------------------
> >> > >> >>>> Datanodes available: 2 (2 total, 0 dead)
> >> > >> >>>>
> >> > >> >>>> Name: 11.1.0.6:50010
> >> > >> >>>> Decommission Status : Normal
> >> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> > >> >>>> DFS Used: 28672 (28 KB)
> >> > >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >> > >> >>>> DFS Remaining: 8807800832(8.2 GB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> DFS Remaining%: 83.31%
> >> > >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> Name: 11.1.0.3:50010
> >> > >> >>>> Decommission Status : Normal
> >> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> > >> >>>> DFS Used: 28672 (28 KB)
> >> > >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >> > >> >>>> DFS Remaining: 8807641088(8.2 GB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> DFS Remaining%: 83.3%
> >> > >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >> > >> >>>> ashettia@hortonworks.com> wrote:
> >> > >> >>>>
> >> > >> >>>>> Hi Redwane,
> >> > >> >>>>>
> >> > >> >>>>> Please run the following command as hdfs user on any datanode.
> >> > >> >>>>> The
> >> > >> >>>>> output will be something like this. Hope this helps
> >> > >> >>>>>
> >> > >> >>>>> hadoop dfsadmin -report
> >> > >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >> > >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >> > >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >> > >> >>>>> DFS Used: 480129024 (457.89 MB)
> >> > >> >>>>> DFS Used%: 0.68%
> >> > >> >>>>> Under replicated blocks: 0
> >> > >> >>>>> Blocks with corrupt replicas: 0
> >> > >> >>>>> Missing blocks: 0
> >> > >> >>>>>
> >> > >> >>>>> Thanks
> >> > >> >>>>> -Abdelrahman
> >> > >> >>>>>
> >> > >> >>>>>
> >> > >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> >> > >> >>>>> <re...@googlemail.com>wrote:
> >> > >> >>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> I have my hosts running on openstack virtual machine instances
> >> > >> >>>>>> each
> >> > >> >>>>>> instance has 10gb hard disc . Is there a way too see how much
> >> > >> >>>>>> space
> >> > >> >>>>>> is in
> >> > >> >>>>>> the hdfs without web ui .
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> Sent from Samsung Mobile
> >> > >> >>>>>>
> >> > >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >> > >> >>>>>> Check web ui how much space you have on hdfs???
> >> > >> >>>>>>
> >> > >> >>>>>> Sent from my iPhone
> >> > >> >>>>>>
> >> > >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >> > >> >>>>>> ashettia@hortonworks.com> wrote:
> >> > >> >>>>>>
> >> > >> >>>>>> Hi Redwane ,
> >> > >> >>>>>>
> >> > >> >>>>>> It is possible that the hosts which are running tasks do not
> >> > >> >>>>>> have enough space. Those dirs are configured in mapred-site.xml.
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >> > >> >>>>>> reduno1985@googlemail.com> wrote:
> >> > >> >>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>> ---------- Forwarded message ----------
> >> > >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> > >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >> > >> >>>>>>> Subject: About running a simple wordcount mapreduce
> >> > >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>> Hi
> >> > >> >>>>>>> I am trying to run  a wordcount mapreduce job on several
> >> > >> >>>>>>> files
> >> > >> >>>>>>> (<20
> >> > >> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >> > >> >>>>>>> The jobtracker log file shows the following warning:
> >> > >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
> >> > >> >>>>>>> task.
> >> > >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
> >> > >> >>>>>>> expect
> >> > >> >>>>>>> map to
> >> > >> >>take
> >> > >> >>>>>>> 1317624576693539401
> >> > >> >>>>>>>
> >> > >> >>>>>>> Please help me ,
> >> > >> >>>>>>> Best Regards,
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>
> >> > >> >
> >> > >> >
> >> > >> > Matteo Lanati
> >> > >> > Distributed Resources Group
> >> > >> > Leibniz-Rechenzentrum (LRZ)
> >> > >> > Boltzmannstrasse 1
> >> > >> > 85748 Garching b. München (Germany)
> >> > >> > Phone: +49 89 35831 8724
> >> > >>
> >> > >>
> >> > >>
> >> > >> --
> >> > >> Harsh J
> >> > >
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Harsh J
> >> >
> >>
> >> Matteo Lanati
> >> Distributed Resources Group
> >> Leibniz-Rechenzentrum (LRZ)
> >> Boltzmannstrasse 1
> >> 85748   Garching b. München     (Germany)
> >> Phone: +49 89 35831 8724
> >>
> >
> 
> 
> 
> --
> Harsh J
> 

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724

Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi again,

unfortunately my problem is not solved.
I downloaded Hadoop v. 1.1.2a and made a basic configuration as suggested in [1].
No security, no ACLs, default scheduler ... The files are attached.
I still have the same error message. I also tried another Java version (6u45 instead of 7u21).
How can I increase the debug level to have a deeper look?
Thanks,

Matteo


[1] http://hadoop.apache.org/docs/r1.1.2/cluster_setup.html#Cluster+Restartability
On Jun 4, 2013, at 3:52 AM, Azuryy Yu <az...@gmail.com> wrote:

> Hi Harsh,
> 
> I need to take care my eyes recently, I mis-read 1.2.0 to 1.0.2, so I said upgrade. Sorry.
> 
> 
> On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:
> Azuryy,
> 
> 1.1.2 < 1.2.0. Its not an upgrade you're suggesting there. If you feel
> there's been a regression, can you comment that on the JIRA?
> 
> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
> > yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
> >
> >
> > On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
> >>
> >> Hi Azuryy,
> >>
> >> thanks for the update. Sorry for the silly question, but where can I
> >> download the patched version?
> >> If I look into the closest mirror (i.e.
> >> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
> >> Hadoop 1.1.2 version was last updated on Jan. 31st.
> >> Thanks in advance,
> >>
> >> Matteo
> >>
> >> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
> >> any security, and the problem is there.
> >>
> >> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
> >>
> >> > can you upgrade to 1.1.2, which is also a stable release, and fixed the
> >> > bug you facing now.
> >> >
> >> > --Send from my Sony mobile.
> >> >
> >> > On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
> >> > Thanks Harsh for the reply. I was confused too that why security is
> >> > causing this.
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
> >> > Shahab - I see he has mentioned generally that security is enabled
> >> > (but not that it happens iff security is enabled), and the issue here
> >> > doesn't have anything to do with security really.
> >> >
> >> > Azurry - Lets discuss the code issues on the JIRA (instead of here) or
> >> > on the mapreduce-dev lists.
> >> >
> >> > On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
> >> > wrote:
> >> > > HI Harsh,
> >> > >
> >> > > Quick question though: why do you think it only happens if the OP
> >> > > 'uses
> >> > > security' as he mentioned?
> >> > >
> >> > > Regards,
> >> > > Shahab
> >> > >
> >> > >
> >> > > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
> >> > >>
> >> > >> Does smell like a bug as that number you get is simply
> >> > >> Long.MAX_VALUE,
> >> > >> or 8 exbibytes.
> >> > >>
> >> > >> Looking at the sources, this turns out to be a rather funny Java
> >> > >> issue
> >> > >> (there's a divide by zero happening, and [1] documents a Long.MAX_VALUE
> >> > >> return in such a case). I've logged a bug report for this at
> >> > >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> >> > >> reproducible case.
> >> > >>
> >> > >> Does this happen consistently for you?
> >> > >>
> >> > >> [1]
> >> > >>
> >> > >> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
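
A minimal, self-contained sketch of the Java behaviour described above
(illustrative only, not the actual Hadoop estimator code; the variable names are
made up and the 600000-byte figure only mirrors the ~600 kB input mentioned in
this report):

    // Shows why a zero denominator ends up as 9223372036854775807
    // (Long.MAX_VALUE) in the "expect map to take" warning.
    public class EstimateOverflow {
        public static void main(String[] args) {
            long completedMaps = 0;                        // nothing has finished yet
            double bytesPerMap = 600000.0 / completedMaps; // divide by zero -> +Infinity
            long estimate = Math.round(bytesPerMap);       // Math.round(+Infinity) == Long.MAX_VALUE
            System.out.println(estimate);                  // prints 9223372036854775807
        }
    }
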
> >> > >>
> >> > >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> >> > >> wrote:
> >> > >> > Hi all,
> >> > >> >
> >> > >> > I stumbled upon this problem as well while trying to run the
> >> > >> > default
> >> > >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
> >> > >> > virtual
> >> > >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
> >> > >> > node is
> >> > >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
> >> > >> > file is
> >> > >> > about 600 kB and the error is
> >> > >> >
> >> > >> > 2013-06-01 12:22:51,999 WARN
> >> > >> > org.apache.hadoop.mapred.JobInProgress: No
> >> > >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
> >> > >> > but we
> >> > >> > expect map to take 9223372036854775807
> >> > >> >
> >> > >> > The logfile is attached, together with the configuration files. The
> >> > >> > version I'm using is
> >> > >> >
> >> > >> > Hadoop 1.2.0
> >> > >> > Subversion
> >> > >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
> >> > >> > -r
> >> > >> > 1479473
> >> > >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> >> > >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> >> > >> > This command was run using
> >> > >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >> > >> >
> >> > >> > If I run the default configuration (i.e. no security), then the job
> >> > >> > succeeds.
> >> > >> >
> >> > >> > Is there something missing in how I set up my nodes? How is it
> >> > >> > possible
> >> > >> > that the envisaged value for the needed space is so big?
> >> > >> >
> >> > >> > Thanks in advance.
> >> > >> >
> >> > >> > Matteo
> >> > >> >
> >> > >> >
> >> > >> >
> >> > >> >>Which version of Hadoop are you using. A quick search shows me a
> >> > >> >> bug
> >> > >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
> >> > >> >> show
> >> > >> >>similar symptoms. However, that was fixed a long while ago.
> >> > >> >>
> >> > >> >>
> >> > >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >> > >> >>reduno1985@googlemail.com> wrote:
> >> > >> >>
> >> > >> >>> This is the content of the jobtracker log file:
> >> > >> >>> 2013-03-23 12:06:48,912 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Input
> >> > >> >>> size for job job_201303231139_0001 = 6950001. Number of splits =
> >> > >> >>> 7
> >> > >> >>> 2013-03-23 12:06:48,925 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000000 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,927 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000001 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,930 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000002 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,931 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000003 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,933 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000004 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,934 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000005 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,939 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000006 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,950 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> > >> >>> 2013-03-23 12:06:48,978 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Job
> >> > >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
> >> > >> >>> and 1
> >> > >> >>> reduce tasks.
> >> > >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> >> > >> >>> Adding
> >> > >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> > >> >>> task_201303231139_0001_m_000008, for tracker
> >> > >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> > >> >>> 2013-03-23 12:08:00,340 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Task
> >> > >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >> > >> >>> task_201303231139_0001_m_000008 successfully.
> >> > >> >>> 2013-03-23 12:08:00,538 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,543 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,544 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,544 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:01,264 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>>
> >> > >> >>>
> >> > >> >>> The value in "we expect map to take" is far too large:
> >> > >> >>> 1317624576693539401
> >> > >> >>> bytes!
> >> > >> >>>
> >> > >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> > >> >>> reduno1985@googlemail.com> wrote:
> >> > >> >>>
> >> > >> >>>> The estimated value that Hadoop computes is far too large for the
> >> > >> >>>> simple
> >> > >> >>>> example that I am running.
> >> > >> >>>>
> >> > >> >>>> ---------- Forwarded message ----------
> >> > >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> > >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >> > >> >>>> Subject: Re: About running a simple wordcount mapreduce
> >> > >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >> > >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> This is the output that I get. I am running two machines, as you can
> >> > >> >>>> see.
> >> > >> >>>> Do
> >> > >> >>>> you see anything suspicious?
> >> > >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >> > >> >>>> Present Capacity: 17615499264 (16.41 GB)
> >> > >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >> > >> >>>> DFS Used: 57344 (56 KB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> Under replicated blocks: 0
> >> > >> >>>> Blocks with corrupt replicas: 0
> >> > >> >>>> Missing blocks: 0
> >> > >> >>>>
> >> > >> >>>> -------------------------------------------------
> >> > >> >>>> Datanodes available: 2 (2 total, 0 dead)
> >> > >> >>>>
> >> > >> >>>> Name: 11.1.0.6:50010
> >> > >> >>>> Decommission Status : Normal
> >> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> > >> >>>> DFS Used: 28672 (28 KB)
> >> > >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >> > >> >>>> DFS Remaining: 8807800832(8.2 GB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> DFS Remaining%: 83.31%
> >> > >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> Name: 11.1.0.3:50010
> >> > >> >>>> Decommission Status : Normal
> >> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> > >> >>>> DFS Used: 28672 (28 KB)
> >> > >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >> > >> >>>> DFS Remaining: 8807641088(8.2 GB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> DFS Remaining%: 83.3%
> >> > >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >> > >> >>>> ashettia@hortonworks.com> wrote:
> >> > >> >>>>
> >> > >> >>>>> Hi Redwane,
> >> > >> >>>>>
> >> > >> >>>>> Please run the following command as hdfs user on any datanode.
> >> > >> >>>>> The
> >> > >> >>>>> output will be something like this. Hope this helps
> >> > >> >>>>>
> >> > >> >>>>> hadoop dfsadmin -report
> >> > >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >> > >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >> > >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >> > >> >>>>> DFS Used: 480129024 (457.89 MB)
> >> > >> >>>>> DFS Used%: 0.68%
> >> > >> >>>>> Under replicated blocks: 0
> >> > >> >>>>> Blocks with corrupt replicas: 0
> >> > >> >>>>> Missing blocks: 0
> >> > >> >>>>>
> >> > >> >>>>> Thanks
> >> > >> >>>>> -Abdelrahman
> >> > >> >>>>>
> >> > >> >>>>>
> >> > >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> >> > >> >>>>> <re...@googlemail.com>wrote:
> >> > >> >>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> I have my hosts running on OpenStack virtual machine instances;
> >> > >> >>>>>> each
> >> > >> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much
> >> > >> >>>>>> space
> >> > >> >>>>>> is in
> >> > >> >>>>>> HDFS without the web UI?
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> Sent from Samsung Mobile
> >> > >> >>>>>>
> >> > >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >> > >> >>>>>> Check the web UI to see how much space you have on HDFS.
> >> > >> >>>>>>
> >> > >> >>>>>> Sent from my iPhone
> >> > >> >>>>>>
> >> > >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >> > >> >>>>>> ashettia@hortonworks.com> wrote:
> >> > >> >>>>>>
> >> > >> >>>>>> Hi Redwane ,
> >> > >> >>>>>>
> >> > >> >>>>>> It is possible that the hosts which are running tasks do
> >> > >> >>>>>> not
> >> > >> >>>>>> have
> >> > >> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>>
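
For reference, those scratch directories in Hadoop 1.x are set through
mapred.local.dir; a hypothetical mapred-site.xml entry pointing them at a
partition with more free space (the path is only an example) would be:

    <property>
      <name>mapred.local.dir</name>
      <value>/data/mapred/local</value>
    </property>
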
> >> > >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >> > >> >>>>>> reduno1985@googlemail.com> wrote:
> >> > >> >>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>> ---------- Forwarded message ----------
> >> > >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> > >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >> > >> >>>>>>> Subject: About running a simple wordcount mapreduce
> >> > >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>> Hi
> >> > >> >>>>>>> I am trying to run a wordcount mapreduce job on several
> >> > >> >>>>>>> files
> >> > >> >>>>>>> (<20
> >> > >> >>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
> >> > >> >>>>>>> The jobtracker log file shows the following warning:
> >> > >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
> >> > >> >>>>>>> task.
> >> > >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
> >> > >> >>>>>>> expect
> >> > >> >>>>>>> map to
> >> > >> >>take
> >> > >> >>>>>>> 1317624576693539401
> >> > >> >>>>>>>
> >> > >> >>>>>>> Please help me ,
> >> > >> >>>>>>> Best Regards,
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>
> >> > >> >
> >> > >> >
> >> > >> > Matteo Lanati
> >> > >> > Distributed Resources Group
> >> > >> > Leibniz-Rechenzentrum (LRZ)
> >> > >> > Boltzmannstrasse 1
> >> > >> > 85748 Garching b. München (Germany)
> >> > >> > Phone: +49 89 35831 8724
> >> > >>
> >> > >>
> >> > >>
> >> > >> --
> >> > >> Harsh J
> >> > >
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Harsh J
> >> >
> >>
> >> Matteo Lanati
> >> Distributed Resources Group
> >> Leibniz-Rechenzentrum (LRZ)
> >> Boltzmannstrasse 1
> >> 85748   Garching b. München     (Germany)
> >> Phone: +49 89 35831 8724
> >>
> >
> 
> 
> 
> --
> Harsh J
> 

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724

Re:

Posted by Azuryy Yu <az...@gmail.com>.
Hi Harsh,

I need to take care of my eyes; I misread 1.2.0 as 1.0.2, so I suggested an
upgrade. Sorry.


On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:

> Azuryy,
>
> 1.1.2 < 1.2.0. It's not an upgrade you're suggesting there. If you feel
> there's been a regression, can you comment that on the JIRA?
>
> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
> > yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
> >
> >
> > On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de>
> wrote:
> >>
> >> Hi Azuryy,
> >>
> >> thanks for the update. Sorry for the silly question, but where can I
> >> download the patched version?
> >> If I look into the closest mirror (i.e.
> >> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that
> the
> >> Hadoop 1.1.2 version was last updated on Jan. 31st.
> >> Thanks in advance,
> >>
> >> Matteo
> >>
> >> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so
> without
> >> any security, and the problem is there.
> >>
> >> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
> >>
> >> > can you upgrade to 1.1.2, which is also a stable release and fixes
> the
> >> > bug you are facing now?
> >> >
> >> > --Send from my Sony mobile.
> >> >
> >> > On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com>
> wrote:
> >> > Thanks Harsh for the reply. I was confused too that why security is
> >> > causing this.
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
> >> > Shahab - I see he has mentioned generally that security is enabled
> >> > (but not that it happens iff security is enabled), and the issue here
> >> > doesn't have anything to do with security really.
> >> >
> >> > Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
> >> > on the mapreduce-dev lists.
> >> >
> >> > On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <shahab.yunus@gmail.com
> >
> >> > wrote:
> >> > > HI Harsh,
> >> > >
> >> > > Quick question though: why do you think it only happens if the OP
> >> > > 'uses
> >> > > security' as he mentioned?
> >> > >
> >> > > Regards,
> >> > > Shahab
> >> > >
> >> > >
> >> > > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com>
> wrote:
> >> > >>
> >> > >> Does smell like a bug as that number you get is simply
> >> > >> Long.MAX_VALUE,
> >> > >> or 8 exbibytes.
> >> > >>
> >> > >> Looking at the sources, this turns out to be a rather funny Java
> >> > >> issue
> >> > >> (there's a divide by zero happening, and [1] documents a Long.MAX_VALUE
> >> > >> return in such a case). I've logged a bug report for this at
> >> > >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> >> > >> reproducible case.
> >> > >>
> >> > >> Does this happen consistently for you?
> >> > >>
> >> > >> [1]
> >> > >>
> >> > >>
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
> >> > >>
> >> > >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <
> Matteo.Lanati@lrz.de>
> >> > >> wrote:
> >> > >> > Hi all,
> >> > >> >
> >> > >> > I stumbled upon this problem as well while trying to run the
> >> > >> > default
> >> > >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
> >> > >> > virtual
> >> > >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
> >> > >> > node is
> >> > >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
> >> > >> > file is
> >> > >> > about 600 kB and the error is
> >> > >> >
> >> > >> > 2013-06-01 12:22:51,999 WARN
> >> > >> > org.apache.hadoop.mapred.JobInProgress: No
> >> > >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
> >> > >> > but we
> >> > >> > expect map to take 9223372036854775807
> >> > >> >
> >> > >> > The logfile is attached, together with the configuration files.
> The
> >> > >> > version I'm using is
> >> > >> >
> >> > >> > Hadoop 1.2.0
> >> > >> > Subversion
> >> > >> >
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
> >> > >> > -r
> >> > >> > 1479473
> >> > >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> >> > >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> >> > >> > This command was run using
> >> > >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >> > >> >
> >> > >> > If I run the default configuration (i.e. no security), then the
> job
> >> > >> > succeeds.
> >> > >> >
> >> > >> > Is there something missing in how I set up my nodes? How is it
> >> > >> > possible
> >> > >> > that the envisaged value for the needed space is so big?
> >> > >> >
> >> > >> > Thanks in advance.
> >> > >> >
> >> > >> > Matteo
> >> > >> >
> >> > >> >
> >> > >> >
> >> > >> >>Which version of Hadoop are you using. A quick search shows me a
> >> > >> >> bug
> >> > >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
> >> > >> >> show
> >> > >> >>similar symptoms. However, that was fixed a long while ago.
> >> > >> >>
> >> > >> >>
> >> > >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >> > >> >>reduno1985@googlemail.com> wrote:
> >> > >> >>
> >> > >> >>> This is the content of the jobtracker log file:
> >> > >> >>> 2013-03-23 12:06:48,912 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Input
> >> > >> >>> size for job job_201303231139_0001 = 6950001. Number of splits
> =
> >> > >> >>> 7
> >> > >> >>> 2013-03-23 12:06:48,925 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000000 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,927 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000001 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,930 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000002 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,931 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000003 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,933 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000004 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,934 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000005 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,939 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000006 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,950 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> > >> >>> 2013-03-23 12:06:48,978 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Job
> >> > >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
> >> > >> >>> and 1
> >> > >> >>> reduce tasks.
> >> > >> >>> 2013-03-23 12:06:50,855 INFO
> org.apache.hadoop.mapred.JobTracker:
> >> > >> >>> Adding
> >> > >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> > >> >>> task_201303231139_0001_m_000008, for tracker
> >> > >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> > >> >>> 2013-03-23 12:08:00,340 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Task
> >> > >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >> > >> >>> task_201303231139_0001_m_000008 successfully.
> >> > >> >>> 2013-03-23 12:08:00,538 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,543 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,544 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,544 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:01,264 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>>
> >> > >> >>>
> >> > >> >>> The value in "we expect map to take" is far too large:
> >> > >> >>> 1317624576693539401
> >> > >> >>> bytes!
> >> > >> >>>
> >> > >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> > >> >>> reduno1985@googlemail.com> wrote:
> >> > >> >>>
> >> > >> >>>> The estimated value that Hadoop computes is far too large for
> the
> >> > >> >>>> simple
> >> > >> >>>> example that I am running.
> >> > >> >>>>
> >> > >> >>>> ---------- Forwarded message ----------
> >> > >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> > >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >> > >> >>>> Subject: Re: About running a simple wordcount mapreduce
> >> > >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >> > >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> This is the output that I get. I am running two machines, as you
> can
> >> > >> >>>> see.
> >> > >> >>>> Do
> >> > >> >>>> you see anything suspicious?
> >> > >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >> > >> >>>> Present Capacity: 17615499264 (16.41 GB)
> >> > >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >> > >> >>>> DFS Used: 57344 (56 KB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> Under replicated blocks: 0
> >> > >> >>>> Blocks with corrupt replicas: 0
> >> > >> >>>> Missing blocks: 0
> >> > >> >>>>
> >> > >> >>>> -------------------------------------------------
> >> > >> >>>> Datanodes available: 2 (2 total, 0 dead)
> >> > >> >>>>
> >> > >> >>>> Name: 11.1.0.6:50010
> >> > >> >>>> Decommission Status : Normal
> >> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> > >> >>>> DFS Used: 28672 (28 KB)
> >> > >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >> > >> >>>> DFS Remaining: 8807800832(8.2 GB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> DFS Remaining%: 83.31%
> >> > >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> Name: 11.1.0.3:50010
> >> > >> >>>> Decommission Status : Normal
> >> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> > >> >>>> DFS Used: 28672 (28 KB)
> >> > >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >> > >> >>>> DFS Remaining: 8807641088(8.2 GB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> DFS Remaining%: 83.3%
> >> > >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >> > >> >>>> ashettia@hortonworks.com> wrote:
> >> > >> >>>>
> >> > >> >>>>> Hi Redwane,
> >> > >> >>>>>
> >> > >> >>>>> Please run the following command as hdfs user on any
> datanode.
> >> > >> >>>>> The
> >> > >> >>>>> output will be something like this. Hope this helps
> >> > >> >>>>>
> >> > >> >>>>> hadoop dfsadmin -report
> >> > >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >> > >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >> > >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >> > >> >>>>> DFS Used: 480129024 (457.89 MB)
> >> > >> >>>>> DFS Used%: 0.68%
> >> > >> >>>>> Under replicated blocks: 0
> >> > >> >>>>> Blocks with corrupt replicas: 0
> >> > >> >>>>> Missing blocks: 0
> >> > >> >>>>>
> >> > >> >>>>> Thanks
> >> > >> >>>>> -Abdelrahman
> >> > >> >>>>>
> >> > >> >>>>>
> >> > >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> >> > >> >>>>> <re...@googlemail.com>wrote:
> >> > >> >>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> I have my hosts running on openstack virtual machine
> instances
> >> > >> >>>>>> each
> >> > >> >>>>>> instance has 10gb hard disc . Is there a way too see how
> much
> >> > >> >>>>>> space
> >> > >> >>>>>> is in
> >> > >> >>>>>> the hdfs without web ui .
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> Sent from Samsung Mobile
> >> > >> >>>>>>
> >> > >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >> > >> >>>>>> Check web ui how much space you have on hdfs???
> >> > >> >>>>>>
> >> > >> >>>>>> Sent from my iPhone
> >> > >> >>>>>>
> >> > >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >> > >> >>>>>> ashettia@hortonworks.com> wrote:
> >> > >> >>>>>>
> >> > >> >>>>>> Hi Redwane ,
> >> > >> >>>>>>
> >> > >> >>>>>> It is possible that the hosts which are running tasks are do
> >> > >> >>>>>> not
> >> > >> >>>>>> have
> >> > >> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>>
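For reference, the directories meant here are the TaskTrackers' local scratch
space. A purely illustrative Hadoop 1.x mapred-site.xml fragment is below; the
paths are placeholders, not values taken from this setup:

<property>
  <name>mapred.local.dir</name>
  <!-- comma-separated local directories for map output and other
       intermediate data; each needs enough free space for the
       estimated map output size -->
  <value>/data/1/mapred/local,/data/2/mapred/local</value>
</property>
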
> >> > >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui
> <
> >> > >> >>>>>> reduno1985@googlemail.com> wrote:
> >> > >> >>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>> ---------- Forwarded message ----------
> >> > >> >>>>>>> From: Redwane belmaati cherkaoui <
> reduno1985@googlemail.com>
> >> > >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >> > >> >>>>>>> Subject: About running a simple wordcount mapreduce
> >> > >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>> Hi
> >> > >> >>>>>>> I am trying to run  a wordcount mapreduce job on several
> >> > >> >>>>>>> files
> >> > >> >>>>>>> (<20
> >> > >> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >> > >> >>>>>>> The jobtracker log file shows the following warning:
> >> > >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for
> map
> >> > >> >>>>>>> task.
> >> > >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
> >> > >> >>>>>>> expect
> >> > >> >>>>>>> map to
> >> > >> >>take
> >> > >> >>>>>>> 1317624576693539401
> >> > >> >>>>>>>
> >> > >> >>>>>>> Please help me ,
> >> > >> >>>>>>> Best Regards,
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>
> >> > >> >
> >> > >> >
> >> > >> > Matteo Lanati
> >> > >> > Distributed Resources Group
> >> > >> > Leibniz-Rechenzentrum (LRZ)
> >> > >> > Boltzmannstrasse 1
> >> > >> > 85748 Garching b. München (Germany)
> >> > >> > Phone: +49 89 35831 8724
> >> > >>
> >> > >>
> >> > >>
> >> > >> --
> >> > >> Harsh J
> >> > >
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Harsh J
> >> >
> >>
> >> Matteo Lanati
> >> Distributed Resources Group
> >> Leibniz-Rechenzentrum (LRZ)
> >> Boltzmannstrasse 1
> >> 85748   Garching b. München     (Germany)
> >> Phone: +49 89 35831 8724
> >>
> >
>
>
>
> --
> Harsh J
>

Re:

Posted by Azuryy Yu <az...@gmail.com>.
Hi Harsh,

I need to take better care of my eyes; I misread 1.2.0 as 1.0.2, which is why I
suggested an upgrade. Sorry.


On Tue, Jun 4, 2013 at 9:46 AM, Harsh J <ha...@cloudera.com> wrote:

> Azuryy,
>
> 1.1.2 < 1.2.0. Its not an upgrade you're suggesting there. If you feel
> there's been a regression, can you comment that on the JIRA?
>
> On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
> > yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
> >
> >
> > On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de>
> wrote:
> >>
> >> Hi Azuryy,
> >>
> >> thanks for the update. Sorry for the silly question, but where can I
> >> download the patched version?
> >> If I look into the closest mirror (i.e.
> >> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that
> the
> >> Hadoop 1.1.2 version was last updated on Jan. 31st.
> >> Thanks in advance,
> >>
> >> Matteo
> >>
> >> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so
> without
> >> any security, and the problem is there.
> >>
> >> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
> >>
> >> > can you upgrade to 1.1.2, which is also a stable release, and fixed
> the
> >> > bug you facing now.
> >> >
> >> > --Send from my Sony mobile.
> >> >
> >> > On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com>
> wrote:
> >> > Thanks Harsh for the reply. I was confused too that why security is
> >> > causing this.
> >> >
> >> > Regards,
> >> > Shahab
> >> >
> >> >
> >> > On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
> >> > Shahab - I see he has mentioned generally that security is enabled
> >> > (but not that it happens iff security is enabled), and the issue here
> >> > doesn't have anything to do with security really.
> >> >
> >> > Azurry - Lets discuss the code issues on the JIRA (instead of here) or
> >> > on the mapreduce-dev lists.
> >> >
> >> > On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <shahab.yunus@gmail.com
> >
> >> > wrote:
> >> > > HI Harsh,
> >> > >
> >> > > Quick question though: why do you think it only happens if the OP
> >> > > 'uses
> >> > > security' as he mentioned?
> >> > >
> >> > > Regards,
> >> > > Shahab
> >> > >
> >> > >
> >> > > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com>
> wrote:
> >> > >>
> >> > >> Does smell like a bug as that number you get is simply
> >> > >> Long.MAX_VALUE,
> >> > >> or 8 exbibytes.
> >> > >>
> >> > >> Looking at the sources, this turns out to be a rather funny Java
> >> > >> issue
> >> > >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> >> > >> return in such a case). I've logged a bug report for this at
> >> > >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> >> > >> reproducible case.
> >> > >>
> >> > >> Does this happen consistently for you?
> >> > >>
> >> > >> [1]
> >> > >>
> >> > >>
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
> >> > >>
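For illustration, a minimal Java sketch of the failure mode described above
(the counter names are hypothetical; only Math.round's documented handling of
+Infinity is relied on):

// Not the actual JobInProgress/ResourceEstimator code -- just a sketch of how
// a zero denominator turns the per-map space estimate into Long.MAX_VALUE.
public class MapSizeEstimateSketch {
    public static void main(String[] args) {
        long completedMapsOutputSize = 1L;      // hypothetical counter values
        long completedMapsInputSize  = 0L;      // no completed input yet -> zero denominator
        long splitSize               = 600000L; // ~600 kB split, as in the report above

        // Dividing a positive double by zero yields +Infinity instead of throwing,
        // and Math.round(+Infinity) is specified to return Long.MAX_VALUE.
        double blowup   = (double) completedMapsOutputSize / completedMapsInputSize;
        long   estimate = Math.round(splitSize * blowup);

        System.out.println(estimate);           // prints 9223372036854775807
    }
}
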
> >> > >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <
> Matteo.Lanati@lrz.de>
> >> > >> wrote:
> >> > >> > Hi all,
> >> > >> >
> >> > >> > I stumbled upon this problem as well while trying to run the
> >> > >> > default
> >> > >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
> >> > >> > virtual
> >> > >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
> >> > >> > node is
> >> > >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
> >> > >> > file is
> >> > >> > about 600 kB and the error is
> >> > >> >
> >> > >> > 2013-06-01 12:22:51,999 WARN
> >> > >> > org.apache.hadoop.mapred.JobInProgress: No
> >> > >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
> >> > >> > but we
> >> > >> > expect map to take 9223372036854775807
> >> > >> >
> >> > >> > The logfile is attached, together with the configuration files.
> The
> >> > >> > version I'm using is
> >> > >> >
> >> > >> > Hadoop 1.2.0
> >> > >> > Subversion
> >> > >> >
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
> >> > >> > -r
> >> > >> > 1479473
> >> > >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> >> > >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> >> > >> > This command was run using
> >> > >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >> > >> >
> >> > >> > If I run the default configuration (i.e. no securty), then the
> job
> >> > >> > succeeds.
> >> > >> >
> >> > >> > Is there something missing in how I set up my nodes? How is it
> >> > >> > possible
> >> > >> > that the envisaged value for the needed space is so big?
> >> > >> >
> >> > >> > Thanks in advance.
> >> > >> >
> >> > >> > Matteo
> >> > >> >
> >> > >> >
> >> > >> >
> >> > >> >>Which version of Hadoop are you using. A quick search shows me a
> >> > >> >> bug
> >> > >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
> >> > >> >> show
> >> > >> >>similar symptoms. However, that was fixed a long while ago.
> >> > >> >>
> >> > >> >>
> >> > >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >> > >> >>reduno1985@googlemail.com> wrote:
> >> > >> >>
> >> > >> >>> This the content of the jobtracker log file :
> >> > >> >>> 2013-03-23 12:06:48,912 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Input
> >> > >> >>> size for job job_201303231139_0001 = 6950001. Number of splits
> =
> >> > >> >>> 7
> >> > >> >>> 2013-03-23 12:06:48,925 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000000 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,927 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000001 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,930 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000002 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,931 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000003 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,933 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000004 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,934 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000005 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,939 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> tip:task_201303231139_0001_m_000006 has split on
> >> > >> >>> node:/default-rack/hadoop0.novalocal
> >> > >> >>> 2013-03-23 12:06:48,950 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> > >> >>> 2013-03-23 12:06:48,978 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Job
> >> > >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
> >> > >> >>> and 1
> >> > >> >>> reduce tasks.
> >> > >> >>> 2013-03-23 12:06:50,855 INFO
> org.apache.hadoop.mapred.JobTracker:
> >> > >> >>> Adding
> >> > >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> > >> >>> task_201303231139_0001_m_000008, for tracker
> >> > >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> > >> >>> 2013-03-23 12:08:00,340 INFO
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> Task
> >> > >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >> > >> >>> task_201303231139_0001_m_000008 successfully.
> >> > >> >>> 2013-03-23 12:08:00,538 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,543 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,544 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:00,544 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>> 2013-03-23 12:08:01,264 WARN
> >> > >> >>> org.apache.hadoop.mapred.JobInProgress:
> >> > >> >>> No
> >> > >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
> >> > >> >>> free;
> >> > >> >>> but we
> >> > >> >>> expect map to take 1317624576693539401
> >> > >> >>>
> >> > >> >>>
> >> > >> >>> The value in we excpect map to take is too huge
> >> > >> >>> 1317624576693539401
> >> > >> >>> bytes  !!!!!!!
> >> > >> >>>
> >> > >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> > >> >>> reduno1985@googlemail.com> wrote:
> >> > >> >>>
> >> > >> >>>> The estimated value that the hadoop compute is too huge for
> the
> >> > >> >>>> simple
> >> > >> >>>> example that i am running .
> >> > >> >>>>
> >> > >> >>>> ---------- Forwarded message ----------
> >> > >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> > >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >> > >> >>>> Subject: Re: About running a simple wordcount mapreduce
> >> > >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >> > >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> This the output that I get I am running two machines  as you
> can
> >> > >> >>>> see
> >> > >> >>>> do
> >> > >> >>>> u see anything suspicious ?
> >> > >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >> > >> >>>> Present Capacity: 17615499264 (16.41 GB)
> >> > >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >> > >> >>>> DFS Used: 57344 (56 KB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> Under replicated blocks: 0
> >> > >> >>>> Blocks with corrupt replicas: 0
> >> > >> >>>> Missing blocks: 0
> >> > >> >>>>
> >> > >> >>>> -------------------------------------------------
> >> > >> >>>> Datanodes available: 2 (2 total, 0 dead)
> >> > >> >>>>
> >> > >> >>>> Name: 11.1.0.6:50010
> >> > >> >>>> Decommission Status : Normal
> >> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> > >> >>>> DFS Used: 28672 (28 KB)
> >> > >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >> > >> >>>> DFS Remaining: 8807800832(8.2 GB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> DFS Remaining%: 83.31%
> >> > >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> Name: 11.1.0.3:50010
> >> > >> >>>> Decommission Status : Normal
> >> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> > >> >>>> DFS Used: 28672 (28 KB)
> >> > >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >> > >> >>>> DFS Remaining: 8807641088(8.2 GB)
> >> > >> >>>> DFS Used%: 0%
> >> > >> >>>> DFS Remaining%: 83.3%
> >> > >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >> > >> >>>> ashettia@hortonworks.com> wrote:
> >> > >> >>>>
> >> > >> >>>>> Hi Redwane,
> >> > >> >>>>>
> >> > >> >>>>> Please run the following command as hdfs user on any
> datanode.
> >> > >> >>>>> The
> >> > >> >>>>> output will be something like this. Hope this helps
> >> > >> >>>>>
> >> > >> >>>>> hadoop dfsadmin -report
> >> > >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >> > >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >> > >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >> > >> >>>>> DFS Used: 480129024 (457.89 MB)
> >> > >> >>>>> DFS Used%: 0.68%
> >> > >> >>>>> Under replicated blocks: 0
> >> > >> >>>>> Blocks with corrupt replicas: 0
> >> > >> >>>>> Missing blocks: 0
> >> > >> >>>>>
> >> > >> >>>>> Thanks
> >> > >> >>>>> -Abdelrahman
> >> > >> >>>>>
> >> > >> >>>>>
> >> > >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> >> > >> >>>>> <re...@googlemail.com>wrote:
> >> > >> >>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> I have my hosts running on openstack virtual machine
> instances
> >> > >> >>>>>> each
> >> > >> >>>>>> instance has 10gb hard disc . Is there a way too see how
> much
> >> > >> >>>>>> space
> >> > >> >>>>>> is in
> >> > >> >>>>>> the hdfs without web ui .
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> Sent from Samsung Mobile
> >> > >> >>>>>>
> >> > >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >> > >> >>>>>> Check web ui how much space you have on hdfs???
> >> > >> >>>>>>
> >> > >> >>>>>> Sent from my iPhone
> >> > >> >>>>>>
> >> > >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >> > >> >>>>>> ashettia@hortonworks.com> wrote:
> >> > >> >>>>>>
> >> > >> >>>>>> Hi Redwane ,
> >> > >> >>>>>>
> >> > >> >>>>>> It is possible that the hosts which are running tasks are do
> >> > >> >>>>>> not
> >> > >> >>>>>> have
> >> > >> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui
> <
> >> > >> >>>>>> reduno1985@googlemail.com> wrote:
> >> > >> >>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>> ---------- Forwarded message ----------
> >> > >> >>>>>>> From: Redwane belmaati cherkaoui <
> reduno1985@googlemail.com>
> >> > >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >> > >> >>>>>>> Subject: About running a simple wordcount mapreduce
> >> > >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>> Hi
> >> > >> >>>>>>> I am trying to run  a wordcount mapreduce job on several
> >> > >> >>>>>>> files
> >> > >> >>>>>>> (<20
> >> > >> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >> > >> >>>>>>> The jobtracker log file shows the following warning:
> >> > >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for
> map
> >> > >> >>>>>>> task.
> >> > >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
> >> > >> >>>>>>> expect
> >> > >> >>>>>>> map to
> >> > >> >>take
> >> > >> >>>>>>> 1317624576693539401
> >> > >> >>>>>>>
> >> > >> >>>>>>> Please help me ,
> >> > >> >>>>>>> Best Regards,
> >> > >> >>>>>>>
> >> > >> >>>>>>>
> >> > >> >>>>>>
> >> > >> >>>>>
> >> > >> >>>>
> >> > >> >>>>
> >> > >> >>>
> >> > >> >
> >> > >> >
> >> > >> > Matteo Lanati
> >> > >> > Distributed Resources Group
> >> > >> > Leibniz-Rechenzentrum (LRZ)
> >> > >> > Boltzmannstrasse 1
> >> > >> > 85748 Garching b. München (Germany)
> >> > >> > Phone: +49 89 35831 8724
> >> > >>
> >> > >>
> >> > >>
> >> > >> --
> >> > >> Harsh J
> >> > >
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Harsh J
> >> >
> >>
> >> Matteo Lanati
> >> Distributed Resources Group
> >> Leibniz-Rechenzentrum (LRZ)
> >> Boltzmannstrasse 1
> >> 85748   Garching b. München     (Germany)
> >> Phone: +49 89 35831 8724
> >>
> >
>
>
>
> --
> Harsh J
>

Re:

Posted by Harsh J <ha...@cloudera.com>.
Azuryy,

1.1.2 < 1.2.0, so it's not an upgrade you're suggesting there. If you feel
there's been a regression, could you comment on the JIRA?

On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>
>
> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
>>
>> Hi Azuryy,
>>
>> thanks for the update. Sorry for the silly question, but where can I
>> download the patched version?
>> If I look into the closest mirror (i.e.
>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>> Thanks in advance,
>>
>> Matteo
>>
>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>> any security, and the problem is there.
>>
>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>> > can you upgrade to 1.1.2, which is also a stable release, and fixed the
>> > bug you facing now.
>> >
>> > --Send from my Sony mobile.
>> >
>> > On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
>> > Thanks Harsh for the reply. I was confused too that why security is
>> > causing this.
>> >
>> > Regards,
>> > Shahab
>> >
>> >
>> > On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>> > Shahab - I see he has mentioned generally that security is enabled
>> > (but not that it happens iff security is enabled), and the issue here
>> > doesn't have anything to do with security really.
>> >
>> > Azurry - Lets discuss the code issues on the JIRA (instead of here) or
>> > on the mapreduce-dev lists.
>> >
>> > On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>> > wrote:
>> > > HI Harsh,
>> > >
>> > > Quick question though: why do you think it only happens if the OP
>> > > 'uses
>> > > security' as he mentioned?
>> > >
>> > > Regards,
>> > > Shahab
>> > >
>> > >
>> > > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>> > >>
>> > >> Does smell like a bug as that number you get is simply
>> > >> Long.MAX_VALUE,
>> > >> or 8 exbibytes.
>> > >>
>> > >> Looking at the sources, this turns out to be a rather funny Java
>> > >> issue
>> > >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> > >> return in such a case). I've logged a bug report for this at
>> > >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> > >> reproducible case.
>> > >>
>> > >> Does this happen consistently for you?
>> > >>
>> > >> [1]
>> > >>
>> > >> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>> > >>
>> > >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> > >> wrote:
>> > >> > Hi all,
>> > >> >
>> > >> > I stumbled upon this problem as well while trying to run the
>> > >> > default
>> > >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>> > >> > virtual
>> > >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>> > >> > node is
>> > >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
>> > >> > file is
>> > >> > about 600 kB and the error is
>> > >> >
>> > >> > 2013-06-01 12:22:51,999 WARN
>> > >> > org.apache.hadoop.mapred.JobInProgress: No
>> > >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>> > >> > but we
>> > >> > expect map to take 9223372036854775807
>> > >> >
>> > >> > The logfile is attached, together with the configuration files. The
>> > >> > version I'm using is
>> > >> >
>> > >> > Hadoop 1.2.0
>> > >> > Subversion
>> > >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
>> > >> > -r
>> > >> > 1479473
>> > >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> > >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> > >> > This command was run using
>> > >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> > >> >
>> > >> > If I run the default configuration (i.e. no securty), then the job
>> > >> > succeeds.
>> > >> >
>> > >> > Is there something missing in how I set up my nodes? How is it
>> > >> > possible
>> > >> > that the envisaged value for the needed space is so big?
>> > >> >
>> > >> > Thanks in advance.
>> > >> >
>> > >> > Matteo
>> > >> >
>> > >> >
>> > >> >
>> > >> >>Which version of Hadoop are you using. A quick search shows me a
>> > >> >> bug
>> > >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
>> > >> >> show
>> > >> >>similar symptoms. However, that was fixed a long while ago.
>> > >> >>
>> > >> >>
>> > >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> > >> >>reduno1985@googlemail.com> wrote:
>> > >> >>
>> > >> >>> This the content of the jobtracker log file :
>> > >> >>> 2013-03-23 12:06:48,912 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> Input
>> > >> >>> size for job job_201303231139_0001 = 6950001. Number of splits =
>> > >> >>> 7
>> > >> >>> 2013-03-23 12:06:48,925 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000000 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,927 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000001 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,930 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000002 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,931 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000003 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,933 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000004 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,934 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000005 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,939 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000006 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,950 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> > >> >>> 2013-03-23 12:06:48,978 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> Job
>> > >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
>> > >> >>> and 1
>> > >> >>> reduce tasks.
>> > >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> > >> >>> Adding
>> > >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> > >> >>> task_201303231139_0001_m_000008, for tracker
>> > >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> > >> >>> 2013-03-23 12:08:00,340 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> Task
>> > >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> > >> >>> task_201303231139_0001_m_000008 successfully.
>> > >> >>> 2013-03-23 12:08:00,538 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>> 2013-03-23 12:08:00,543 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>> 2013-03-23 12:08:00,544 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>> 2013-03-23 12:08:00,544 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>> 2013-03-23 12:08:01,264 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>>
>> > >> >>>
>> > >> >>> The value in we excpect map to take is too huge
>> > >> >>> 1317624576693539401
>> > >> >>> bytes  !!!!!!!
>> > >> >>>
>> > >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> > >> >>> reduno1985@googlemail.com> wrote:
>> > >> >>>
>> > >> >>>> The estimated value that the hadoop compute is too huge for the
>> > >> >>>> simple
>> > >> >>>> example that i am running .
>> > >> >>>>
>> > >> >>>> ---------- Forwarded message ----------
>> > >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> > >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> > >> >>>> Subject: Re: About running a simple wordcount mapreduce
>> > >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> > >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> > >> >>>>
>> > >> >>>>
>> > >> >>>> This the output that I get I am running two machines  as you can
>> > >> >>>> see
>> > >> >>>> do
>> > >> >>>> u see anything suspicious ?
>> > >> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> > >> >>>> Present Capacity: 17615499264 (16.41 GB)
>> > >> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> > >> >>>> DFS Used: 57344 (56 KB)
>> > >> >>>> DFS Used%: 0%
>> > >> >>>> Under replicated blocks: 0
>> > >> >>>> Blocks with corrupt replicas: 0
>> > >> >>>> Missing blocks: 0
>> > >> >>>>
>> > >> >>>> -------------------------------------------------
>> > >> >>>> Datanodes available: 2 (2 total, 0 dead)
>> > >> >>>>
>> > >> >>>> Name: 11.1.0.6:50010
>> > >> >>>> Decommission Status : Normal
>> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> > >> >>>> DFS Used: 28672 (28 KB)
>> > >> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> > >> >>>> DFS Remaining: 8807800832(8.2 GB)
>> > >> >>>> DFS Used%: 0%
>> > >> >>>> DFS Remaining%: 83.31%
>> > >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> > >> >>>>
>> > >> >>>>
>> > >> >>>> Name: 11.1.0.3:50010
>> > >> >>>> Decommission Status : Normal
>> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> > >> >>>> DFS Used: 28672 (28 KB)
>> > >> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> > >> >>>> DFS Remaining: 8807641088(8.2 GB)
>> > >> >>>> DFS Used%: 0%
>> > >> >>>> DFS Remaining%: 83.3%
>> > >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> > >> >>>>
>> > >> >>>>
>> > >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> > >> >>>> ashettia@hortonworks.com> wrote:
>> > >> >>>>
>> > >> >>>>> Hi Redwane,
>> > >> >>>>>
>> > >> >>>>> Please run the following command as hdfs user on any datanode.
>> > >> >>>>> The
>> > >> >>>>> output will be something like this. Hope this helps
>> > >> >>>>>
>> > >> >>>>> hadoop dfsadmin -report
>> > >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> > >> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> > >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> > >> >>>>> DFS Used: 480129024 (457.89 MB)
>> > >> >>>>> DFS Used%: 0.68%
>> > >> >>>>> Under replicated blocks: 0
>> > >> >>>>> Blocks with corrupt replicas: 0
>> > >> >>>>> Missing blocks: 0
>> > >> >>>>>
>> > >> >>>>> Thanks
>> > >> >>>>> -Abdelrahman
>> > >> >>>>>
>> > >> >>>>>
>> > >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>> > >> >>>>> <re...@googlemail.com>wrote:
>> > >> >>>>>
>> > >> >>>>>>
>> > >> >>>>>> I have my hosts running on openstack virtual machine instances
>> > >> >>>>>> each
>> > >> >>>>>> instance has 10gb hard disc . Is there a way too see how much
>> > >> >>>>>> space
>> > >> >>>>>> is in
>> > >> >>>>>> the hdfs without web ui .
>> > >> >>>>>>
>> > >> >>>>>>
>> > >> >>>>>> Sent from Samsung Mobile
>> > >> >>>>>>
>> > >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> > >> >>>>>> Check web ui how much space you have on hdfs???
>> > >> >>>>>>
>> > >> >>>>>> Sent from my iPhone
>> > >> >>>>>>
>> > >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> > >> >>>>>> ashettia@hortonworks.com> wrote:
>> > >> >>>>>>
>> > >> >>>>>> Hi Redwane ,
>> > >> >>>>>>
>> > >> >>>>>> It is possible that the hosts which are running tasks are do
>> > >> >>>>>> not
>> > >> >>>>>> have
>> > >> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
>> > >> >>>>>>
>> > >> >>>>>>
>> > >> >>>>>>
>> > >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> > >> >>>>>> reduno1985@googlemail.com> wrote:
>> > >> >>>>>>
>> > >> >>>>>>>
>> > >> >>>>>>>
>> > >> >>>>>>> ---------- Forwarded message ----------
>> > >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> > >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> > >> >>>>>>> Subject: About running a simple wordcount mapreduce
>> > >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> > >> >>>>>>>
>> > >> >>>>>>>
>> > >> >>>>>>> Hi
>> > >> >>>>>>> I am trying to run  a wordcount mapreduce job on several
>> > >> >>>>>>> files
>> > >> >>>>>>> (<20
>> > >> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>> > >> >>>>>>> The jobtracker log file shows the following warning:
>> > >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> > >> >>>>>>> task.
>> > >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
>> > >> >>>>>>> expect
>> > >> >>>>>>> map to
>> > >> >>take
>> > >> >>>>>>> 1317624576693539401
>> > >> >>>>>>>
>> > >> >>>>>>> Please help me ,
>> > >> >>>>>>> Best Regards,
>> > >> >>>>>>>
>> > >> >>>>>>>
>> > >> >>>>>>
>> > >> >>>>>
>> > >> >>>>
>> > >> >>>>
>> > >> >>>
>> > >> >
>> > >> >
>> > >> > Matteo Lanati
>> > >> > Distributed Resources Group
>> > >> > Leibniz-Rechenzentrum (LRZ)
>> > >> > Boltzmannstrasse 1
>> > >> > 85748 Garching b. München (Germany)
>> > >> > Phone: +49 89 35831 8724
>> > >>
>> > >>
>> > >>
>> > >> --
>> > >> Harsh J
>> > >
>> > >
>> >
>> >
>> >
>> > --
>> > Harsh J
>> >
>>
>> Matteo Lanati
>> Distributed Resources Group
>> Leibniz-Rechenzentrum (LRZ)
>> Boltzmannstrasse 1
>> 85748   Garching b. München     (Germany)
>> Phone: +49 89 35831 8724
>>
>



-- 
Harsh J
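
A minimal Java sketch of the rounding behaviour described above (illustrative
only: the class name, the variable names and the zero divisor are assumptions,
not code taken from the Hadoop sources):

    public class RoundingOverflowSketch {
        public static void main(String[] args) {
            // Double division by zero does not throw in Java; it yields Infinity.
            double estimatedBytesPerMap = 6950001.0 / 0.0;

            // Math.round(double) maps positive infinity (and any value >= Long.MAX_VALUE)
            // to Long.MAX_VALUE, which is the 9223372036854775807 seen in the warning.
            long expected = Math.round(estimatedBytesPerMap);
            System.out.println(expected);  // prints 9223372036854775807
        }
    }

If whatever divisor feeds that size estimate is zero, the average blows up to
Long.MAX_VALUE and no node can satisfy the free-space check, which matches the
repeated "No room for map task" warnings quoted above.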

Re:

Posted by Harsh J <ha...@cloudera.com>.
Azuryy,

1.1.2 < 1.2.0. It's not an upgrade you're suggesting there. If you feel
there's been a regression, can you comment on that on the JIRA?

On Tue, Jun 4, 2013 at 6:57 AM, Azuryy Yu <az...@gmail.com> wrote:
> yes. hadoop-1.1.2 was released on Jan. 31st. just download it.
>
>
> On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:
>>
>> Hi Azuryy,
>>
>> thanks for the update. Sorry for the silly question, but where can I
>> download the patched version?
>> If I look into the closest mirror (i.e.
>> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the
>> Hadoop 1.1.2 version was last updated on Jan. 31st.
>> Thanks in advance,
>>
>> Matteo
>>
>> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
>> any security, and the problem is there.
>>
>> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>> > can you upgrade to 1.1.2, which is also a stable release, and fixed the
>> > bug you facing now.
>> >
>> > --Send from my Sony mobile.
>> >
>> > On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
>> > Thanks Harsh for the reply. I was confused too that why security is
>> > causing this.
>> >
>> > Regards,
>> > Shahab
>> >
>> >
>> > On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>> > Shahab - I see he has mentioned generally that security is enabled
>> > (but not that it happens iff security is enabled), and the issue here
>> > doesn't have anything to do with security really.
>> >
>> > Azurry - Lets discuss the code issues on the JIRA (instead of here) or
>> > on the mapreduce-dev lists.
>> >
>> > On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>> > wrote:
>> > > HI Harsh,
>> > >
>> > > Quick question though: why do you think it only happens if the OP
>> > > 'uses
>> > > security' as he mentioned?
>> > >
>> > > Regards,
>> > > Shahab
>> > >
>> > >
>> > > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>> > >>
>> > >> Does smell like a bug as that number you get is simply
>> > >> Long.MAX_VALUE,
>> > >> or 8 exbibytes.
>> > >>
>> > >> Looking at the sources, this turns out to be a rather funny Java
>> > >> issue
>> > >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> > >> return in such a case). I've logged a bug report for this at
>> > >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> > >> reproducible case.
>> > >>
>> > >> Does this happen consistently for you?
>> > >>
>> > >> [1]
>> > >>
>> > >> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>> > >>
>> > >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> > >> wrote:
>> > >> > Hi all,
>> > >> >
>> > >> > I stumbled upon this problem as well while trying to run the
>> > >> > default
>> > >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>> > >> > virtual
>> > >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>> > >> > node is
>> > >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
>> > >> > file is
>> > >> > about 600 kB and the error is
>> > >> >
>> > >> > 2013-06-01 12:22:51,999 WARN
>> > >> > org.apache.hadoop.mapred.JobInProgress: No
>> > >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>> > >> > but we
>> > >> > expect map to take 9223372036854775807
>> > >> >
>> > >> > The logfile is attached, together with the configuration files. The
>> > >> > version I'm using is
>> > >> >
>> > >> > Hadoop 1.2.0
>> > >> > Subversion
>> > >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2
>> > >> > -r
>> > >> > 1479473
>> > >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> > >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> > >> > This command was run using
>> > >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> > >> >
>> > >> > If I run the default configuration (i.e. no securty), then the job
>> > >> > succeeds.
>> > >> >
>> > >> > Is there something missing in how I set up my nodes? How is it
>> > >> > possible
>> > >> > that the envisaged value for the needed space is so big?
>> > >> >
>> > >> > Thanks in advance.
>> > >> >
>> > >> > Matteo
>> > >> >
>> > >> >
>> > >> >
>> > >> >>Which version of Hadoop are you using. A quick search shows me a
>> > >> >> bug
>> > >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
>> > >> >> show
>> > >> >>similar symptoms. However, that was fixed a long while ago.
>> > >> >>
>> > >> >>
>> > >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> > >> >>reduno1985@googlemail.com> wrote:
>> > >> >>
>> > >> >>> This the content of the jobtracker log file :
>> > >> >>> 2013-03-23 12:06:48,912 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> Input
>> > >> >>> size for job job_201303231139_0001 = 6950001. Number of splits =
>> > >> >>> 7
>> > >> >>> 2013-03-23 12:06:48,925 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000000 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,927 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000001 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,930 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000002 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,931 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000003 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,933 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000004 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,934 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000005 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,939 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> tip:task_201303231139_0001_m_000006 has split on
>> > >> >>> node:/default-rack/hadoop0.novalocal
>> > >> >>> 2013-03-23 12:06:48,950 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> > >> >>> 2013-03-23 12:06:48,978 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> Job
>> > >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
>> > >> >>> and 1
>> > >> >>> reduce tasks.
>> > >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> > >> >>> Adding
>> > >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> > >> >>> task_201303231139_0001_m_000008, for tracker
>> > >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> > >> >>> 2013-03-23 12:08:00,340 INFO
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> Task
>> > >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> > >> >>> task_201303231139_0001_m_000008 successfully.
>> > >> >>> 2013-03-23 12:08:00,538 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>> 2013-03-23 12:08:00,543 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>> 2013-03-23 12:08:00,544 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>> 2013-03-23 12:08:00,544 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>> 2013-03-23 12:08:01,264 WARN
>> > >> >>> org.apache.hadoop.mapred.JobInProgress:
>> > >> >>> No
>> > >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>> > >> >>> free;
>> > >> >>> but we
>> > >> >>> expect map to take 1317624576693539401
>> > >> >>>
>> > >> >>>
>> > >> >>> The value in we excpect map to take is too huge
>> > >> >>> 1317624576693539401
>> > >> >>> bytes  !!!!!!!
>> > >> >>>
>> > >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> > >> >>> reduno1985@googlemail.com> wrote:
>> > >> >>>
>> > >> >>>> The estimated value that the hadoop compute is too huge for the
>> > >> >>>> simple
>> > >> >>>> example that i am running .
>> > >> >>>>
>> > >> >>>> ---------- Forwarded message ----------
>> > >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> > >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> > >> >>>> Subject: Re: About running a simple wordcount mapreduce
>> > >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> > >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> > >> >>>>
>> > >> >>>>
>> > >> >>>> This the output that I get I am running two machines  as you can
>> > >> >>>> see
>> > >> >>>> do
>> > >> >>>> u see anything suspicious ?
>> > >> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> > >> >>>> Present Capacity: 17615499264 (16.41 GB)
>> > >> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> > >> >>>> DFS Used: 57344 (56 KB)
>> > >> >>>> DFS Used%: 0%
>> > >> >>>> Under replicated blocks: 0
>> > >> >>>> Blocks with corrupt replicas: 0
>> > >> >>>> Missing blocks: 0
>> > >> >>>>
>> > >> >>>> -------------------------------------------------
>> > >> >>>> Datanodes available: 2 (2 total, 0 dead)
>> > >> >>>>
>> > >> >>>> Name: 11.1.0.6:50010
>> > >> >>>> Decommission Status : Normal
>> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> > >> >>>> DFS Used: 28672 (28 KB)
>> > >> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> > >> >>>> DFS Remaining: 8807800832(8.2 GB)
>> > >> >>>> DFS Used%: 0%
>> > >> >>>> DFS Remaining%: 83.31%
>> > >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> > >> >>>>
>> > >> >>>>
>> > >> >>>> Name: 11.1.0.3:50010
>> > >> >>>> Decommission Status : Normal
>> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> > >> >>>> DFS Used: 28672 (28 KB)
>> > >> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> > >> >>>> DFS Remaining: 8807641088(8.2 GB)
>> > >> >>>> DFS Used%: 0%
>> > >> >>>> DFS Remaining%: 83.3%
>> > >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> > >> >>>>
>> > >> >>>>
>> > >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> > >> >>>> ashettia@hortonworks.com> wrote:
>> > >> >>>>
>> > >> >>>>> Hi Redwane,
>> > >> >>>>>
>> > >> >>>>> Please run the following command as hdfs user on any datanode.
>> > >> >>>>> The
>> > >> >>>>> output will be something like this. Hope this helps
>> > >> >>>>>
>> > >> >>>>> hadoop dfsadmin -report
>> > >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> > >> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> > >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> > >> >>>>> DFS Used: 480129024 (457.89 MB)
>> > >> >>>>> DFS Used%: 0.68%
>> > >> >>>>> Under replicated blocks: 0
>> > >> >>>>> Blocks with corrupt replicas: 0
>> > >> >>>>> Missing blocks: 0
>> > >> >>>>>
>> > >> >>>>> Thanks
>> > >> >>>>> -Abdelrahman
>> > >> >>>>>
>> > >> >>>>>
>> > >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>> > >> >>>>> <re...@googlemail.com>wrote:
>> > >> >>>>>
>> > >> >>>>>>
>> > >> >>>>>> I have my hosts running on openstack virtual machine instances
>> > >> >>>>>> each
>> > >> >>>>>> instance has 10gb hard disc . Is there a way too see how much
>> > >> >>>>>> space
>> > >> >>>>>> is in
>> > >> >>>>>> the hdfs without web ui .
>> > >> >>>>>>
>> > >> >>>>>>
>> > >> >>>>>> Sent from Samsung Mobile
>> > >> >>>>>>
>> > >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> > >> >>>>>> Check web ui how much space you have on hdfs???
>> > >> >>>>>>
>> > >> >>>>>> Sent from my iPhone
>> > >> >>>>>>
>> > >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> > >> >>>>>> ashettia@hortonworks.com> wrote:
>> > >> >>>>>>
>> > >> >>>>>> Hi Redwane ,
>> > >> >>>>>>
>> > >> >>>>>> It is possible that the hosts which are running tasks are do
>> > >> >>>>>> not
>> > >> >>>>>> have
>> > >> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
>> > >> >>>>>>
>> > >> >>>>>>
>> > >> >>>>>>
>> > >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> > >> >>>>>> reduno1985@googlemail.com> wrote:
>> > >> >>>>>>
>> > >> >>>>>>>
>> > >> >>>>>>>
>> > >> >>>>>>> ---------- Forwarded message ----------
>> > >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> > >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> > >> >>>>>>> Subject: About running a simple wordcount mapreduce
>> > >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> > >> >>>>>>>
>> > >> >>>>>>>
>> > >> >>>>>>> Hi
>> > >> >>>>>>> I am trying to run  a wordcount mapreduce job on several
>> > >> >>>>>>> files
>> > >> >>>>>>> (<20
>> > >> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>> > >> >>>>>>> The jobtracker log file shows the following warning:
>> > >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> > >> >>>>>>> task.
>> > >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
>> > >> >>>>>>> expect
>> > >> >>>>>>> map to
>> > >> >>take
>> > >> >>>>>>> 1317624576693539401
>> > >> >>>>>>>
>> > >> >>>>>>> Please help me ,
>> > >> >>>>>>> Best Regards,
>> > >> >>>>>>>
>> > >> >>>>>>>
>> > >> >>>>>>
>> > >> >>>>>
>> > >> >>>>
>> > >> >>>>
>> > >> >>>
>> > >> >
>> > >> >
>> > >> > Matteo Lanati
>> > >> > Distributed Resources Group
>> > >> > Leibniz-Rechenzentrum (LRZ)
>> > >> > Boltzmannstrasse 1
>> > >> > 85748 Garching b. München (Germany)
>> > >> > Phone: +49 89 35831 8724
>> > >>
>> > >>
>> > >>
>> > >> --
>> > >> Harsh J
>> > >
>> > >
>> >
>> >
>> >
>> > --
>> > Harsh J
>> >
>>
>> Matteo Lanati
>> Distributed Resources Group
>> Leibniz-Rechenzentrum (LRZ)
>> Boltzmannstrasse 1
>> 85748   Garching b. München     (Germany)
>> Phone: +49 89 35831 8724
>>
>



-- 
Harsh J

Re:

Posted by Azuryy Yu <az...@gmail.com>.
Yes, hadoop-1.1.2 was released on Jan. 31st; just download it.


On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:

> Hi Azuryy,
>
> thanks for the update. Sorry for the silly question, but where can I
> download the patched version?
> If I look into the closest mirror (i.e.
> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that
> the Hadoop 1.1.2 version was last updated on Jan. 31st.
> Thanks in advance,
>
> Matteo
>
> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
> any security, and the problem is there.
>
> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>
> > can you upgrade to 1.1.2, which is also a stable release, and fixed the
> bug you facing now.
> >
> > --Send from my Sony mobile.
> >
> > On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
> > Thanks Harsh for the reply. I was confused too that why security is
> causing this.
> >
> > Regards,
> > Shahab
> >
> >
> > On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
> > Shahab - I see he has mentioned generally that security is enabled
> > (but not that it happens iff security is enabled), and the issue here
> > doesn't have anything to do with security really.
> >
> > Azurry - Lets discuss the code issues on the JIRA (instead of here) or
> > on the mapreduce-dev lists.
> >
> > On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
> wrote:
> > > HI Harsh,
> > >
> > > Quick question though: why do you think it only happens if the OP 'uses
> > > security' as he mentioned?
> > >
> > > Regards,
> > > Shahab
> > >
> > >
> > > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
> > >>
> > >> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> > >> or 8 exbibytes.
> > >>
> > >> Looking at the sources, this turns out to be a rather funny Java issue
> > >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> > >> return in such a case). I've logged a bug report for this at
> > >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> > >> reproducible case.
> > >>
> > >> Does this happen consistently for you?
> > >>
> > >> [1]
> > >>
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
> > >>
> > >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> > >> wrote:
> > >> > Hi all,
> > >> >
> > >> > I stumbled upon this problem as well while trying to run the default
> > >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
> virtual
> > >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
> node is
> > >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
> file is
> > >> > about 600 kB and the error is
> > >> >
> > >> > 2013-06-01 12:22:51,999 WARN
> org.apache.hadoop.mapred.JobInProgress: No
> > >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
> but we
> > >> > expect map to take 9223372036854775807
> > >> >
> > >> > The logfile is attached, together with the configuration files. The
> > >> > version I'm using is
> > >> >
> > >> > Hadoop 1.2.0
> > >> > Subversion
> > >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2-r
> > >> > 1479473
> > >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> > >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> > >> > This command was run using
> > >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> > >> >
> > >> > If I run the default configuration (i.e. no securty), then the job
> > >> > succeeds.
> > >> >
> > >> > Is there something missing in how I set up my nodes? How is it
> possible
> > >> > that the envisaged value for the needed space is so big?
> > >> >
> > >> > Thanks in advance.
> > >> >
> > >> > Matteo
> > >> >
> > >> >
> > >> >
> > >> >>Which version of Hadoop are you using. A quick search shows me a bug
> > >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
> show
> > >> >>similar symptoms. However, that was fixed a long while ago.
> > >> >>
> > >> >>
> > >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> > >> >>reduno1985@googlemail.com> wrote:
> > >> >>
> > >> >>> This is the content of the jobtracker log file:
> > >> >>> 2013-03-23 12:06:48,912 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> Input
> > >> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> > >> >>> 2013-03-23 12:06:48,925 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000000 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,927 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000001 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,930 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000002 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,931 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000003 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,933 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000004 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,934 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000005 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,939 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000006 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,950 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> > >> >>> 2013-03-23 12:06:48,978 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> Job
> > >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
> and 1
> > >> >>> reduce tasks.
> > >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> > >> >>> Adding
> > >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> > >> >>> task_201303231139_0001_m_000008, for tracker
> > >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> > >> >>> 2013-03-23 12:08:00,340 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> Task
> > >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> > >> >>> task_201303231139_0001_m_000008 successfully.
> > >> >>> 2013-03-23 12:08:00,538 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>> 2013-03-23 12:08:00,543 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>> 2013-03-23 12:08:00,544 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>> 2013-03-23 12:08:00,544 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>> 2013-03-23 12:08:01,264 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>>
> > >> >>>
> > >> >>> The value in "we expect map to take" is far too huge: 1317624576693539401
> > >> >>> bytes!
> > >> >>>
> > >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> > >> >>> reduno1985@googlemail.com> wrote:
> > >> >>>
> > >> >>>> The estimated value that Hadoop computes is too huge for the simple
> > >> >>>> example that I am running.
> > >> >>>>
> > >> >>>> ---------- Forwarded message ----------
> > >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> > >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> > >> >>>> Subject: Re: About running a simple wordcount mapreduce
> > >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> > >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> > >> >>>>
> > >> >>>>
> > >> >>>> This is the output that I get. I am running two machines, as you can
> > >> >>>> see. Do you see anything suspicious?
> > >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> > >> >>>> Present Capacity: 17615499264 (16.41 GB)
> > >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> > >> >>>> DFS Used: 57344 (56 KB)
> > >> >>>> DFS Used%: 0%
> > >> >>>> Under replicated blocks: 0
> > >> >>>> Blocks with corrupt replicas: 0
> > >> >>>> Missing blocks: 0
> > >> >>>>
> > >> >>>> -------------------------------------------------
> > >> >>>> Datanodes available: 2 (2 total, 0 dead)
> > >> >>>>
> > >> >>>> Name: 11.1.0.6:50010
> > >> >>>> Decommission Status : Normal
> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> > >> >>>> DFS Used: 28672 (28 KB)
> > >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> > >> >>>> DFS Remaining: 8807800832(8.2 GB)
> > >> >>>> DFS Used%: 0%
> > >> >>>> DFS Remaining%: 83.31%
> > >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> > >> >>>>
> > >> >>>>
> > >> >>>> Name: 11.1.0.3:50010
> > >> >>>> Decommission Status : Normal
> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> > >> >>>> DFS Used: 28672 (28 KB)
> > >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> > >> >>>> DFS Remaining: 8807641088(8.2 GB)
> > >> >>>> DFS Used%: 0%
> > >> >>>> DFS Remaining%: 83.3%
> > >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> > >> >>>>
> > >> >>>>
> > >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> > >> >>>> ashettia@hortonworks.com> wrote:
> > >> >>>>
> > >> >>>>> Hi Redwane,
> > >> >>>>>
> > >> >>>>> Please run the following command as hdfs user on any datanode.
> The
> > >> >>>>> output will be something like this. Hope this helps
> > >> >>>>>
> > >> >>>>> hadoop dfsadmin -report
> > >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> > >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> > >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> > >> >>>>> DFS Used: 480129024 (457.89 MB)
> > >> >>>>> DFS Used%: 0.68%
> > >> >>>>> Under replicated blocks: 0
> > >> >>>>> Blocks with corrupt replicas: 0
> > >> >>>>> Missing blocks: 0
> > >> >>>>>
> > >> >>>>> Thanks
> > >> >>>>> -Abdelrahman
> > >> >>>>>
> > >> >>>>>
> > >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> > >> >>>>> <re...@googlemail.com>wrote:
> > >> >>>>>
> > >> >>>>>>
> > >> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
> > >> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
> > >> >>>>>> is in the HDFS without the web UI?
> > >> >>>>>>
> > >> >>>>>>
> > >> >>>>>> Sent from Samsung Mobile
> > >> >>>>>>
> > >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> > >> >>>>>> Check web ui how much space you have on hdfs???
> > >> >>>>>>
> > >> >>>>>> Sent from my iPhone
> > >> >>>>>>
> > >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> > >> >>>>>> ashettia@hortonworks.com> wrote:
> > >> >>>>>>
> > >> >>>>>> Hi Redwane ,
> > >> >>>>>>
> > >> >>>>>> It is possible that the hosts which are running tasks do not have
> > >> >>>>>> enough space. Those dirs are configured in mapred-site.xml
> > >> >>>>>>
> > >> >>>>>>
> > >> >>>>>>
> > >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> > >> >>>>>> reduno1985@googlemail.com> wrote:
> > >> >>>>>>
> > >> >>>>>>>
> > >> >>>>>>>
> > >> >>>>>>> ---------- Forwarded message ----------
> > >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> > >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> > >> >>>>>>> Subject: About running a simple wordcount mapreduce
> > >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> > >> >>>>>>>
> > >> >>>>>>>
> > >> >>>>>>> Hi
> > >> >>>>>>> I am trying to run a wordcount mapreduce job on several files
> > >> >>>>>>> (<20 MB) using two machines. I get stuck on 0% map, 0% reduce.
> > >> >>>>>>> The jobtracker log file shows the following warning:
> > >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
> > >> >>>>>>> task.
> > >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
> expect
> > >> >>>>>>> map to
> > >> >>take
> > >> >>>>>>> 1317624576693539401
> > >> >>>>>>>
> > >> >>>>>>> Please help me ,
> > >> >>>>>>> Best Regards,
> > >> >>>>>>>
> > >> >>>>>>>
> > >> >>>>>>
> > >> >>>>>
> > >> >>>>
> > >> >>>>
> > >> >>>
> > >> >
> > >> >
> > >> > Matteo Lanati
> > >> > Distributed Resources Group
> > >> > Leibniz-Rechenzentrum (LRZ)
> > >> > Boltzmannstrasse 1
> > >> > 85748 Garching b. München (Germany)
> > >> > Phone: +49 89 35831 8724
> > >>
> > >>
> > >>
> > >> --
> > >> Harsh J
> > >
> > >
> >
> >
> >
> > --
> > Harsh J
> >
>
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748   Garching b. München     (Germany)
> Phone: +49 89 35831 8724
>
>
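
A minimal Java sketch of the Math.round() behaviour Harsh describes above (an illustration only, not code taken from Hadoop; the variable names are hypothetical, chosen to mirror the idea of a map-output estimator): dividing a double by zero yields Infinity rather than throwing, and Math.round(Infinity) clamps to Long.MAX_VALUE, i.e. exactly the 9223372036854775807 bytes reported in the warning.

    public class RoundOverflowDemo {
        public static void main(String[] args) {
            // Hypothetical estimator inputs: no map has completed yet, so the divisor is zero.
            double completedMapsInputSize = 0.0;
            double completedMapsOutputSize = 600 * 1024; // roughly the 600 kB input of the job

            // Dividing a double by zero does not throw; it produces Infinity.
            double estimate = completedMapsOutputSize / completedMapsInputSize;
            System.out.println(estimate);              // Infinity

            // Math.round(double) returns Long.MAX_VALUE for positive infinity (see the Javadoc linked above).
            System.out.println(Math.round(estimate));  // 9223372036854775807
            System.out.println(Long.MAX_VALUE);        // 9223372036854775807
        }
    }

Any space estimate produced this way will always exceed a node's free space, so the "No room for map task" check can never pass, which matches the behaviour in the logs.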

Re:

Posted by Azuryy Yu <az...@gmail.com>.
yes. hadoop-1.1.2 was released on Jan. 31st. just download it.


On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo <Ma...@lrz.de> wrote:

> Hi Azuryy,
>
> thanks for the update. Sorry for the silly question, but where can I
> download the patched version?
> If I look into the closest mirror (i.e.
> http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that
> the Hadoop 1.1.2 version was last updated on Jan. 31st.
> Thanks in advance,
>
> Matteo
>
> PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without
> any security, and the problem is there.
>
> On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:
>
> > Can you upgrade to 1.1.2, which is also a stable release and fixes the
> bug you are facing now?
> >
> > --Send from my Sony mobile.
> >
> > On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
> > Thanks Harsh for the reply. I was confused too about why security is
> causing this.
> >
> > Regards,
> > Shahab
> >
> >
> > On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
> > Shahab - I see he has mentioned generally that security is enabled
> > (but not that it happens iff security is enabled), and the issue here
> > doesn't have anything to do with security really.
> >
> > Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
> > on the mapreduce-dev lists.
> >
> > On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
> wrote:
> > > HI Harsh,
> > >
> > > Quick question though: why do you think it only happens if the OP 'uses
> > > security' as he mentioned?
> > >
> > > Regards,
> > > Shahab
> > >
> > >
> > > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
> > >>
> > >> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> > >> or 8 exbibytes.
> > >>
> > >> Looking at the sources, this turns out to be a rather funny Java issue
> > >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> > >> return in such a case). I've logged a bug report for this at
> > >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> > >> reproducible case.
> > >>
> > >> Does this happen consistently for you?
> > >>
> > >> [1]
> > >>
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
> > >>
> > >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> > >> wrote:
> > >> > Hi all,
> > >> >
> > >> > I stumbled upon this problem as well while trying to run the default
> > >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
> virtual
> > >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
> node is
> > >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
> file is
> > >> > about 600 kB and the error is
> > >> >
> > >> > 2013-06-01 12:22:51,999 WARN
> org.apache.hadoop.mapred.JobInProgress: No
> > >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
> but we
> > >> > expect map to take 9223372036854775807
> > >> >
> > >> > The logfile is attached, together with the configuration files. The
> > >> > version I'm using is
> > >> >
> > >> > Hadoop 1.2.0
> > >> > Subversion
> > >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> > >> > 1479473
> > >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> > >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> > >> > This command was run using
> > >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> > >> >
> > >> > If I run the default configuration (i.e. no security), then the job
> > >> > succeeds.
> > >> >
> > >> > Is there something missing in how I set up my nodes? How is it
> possible
> > >> > that the envisaged value for the needed space is so big?
> > >> >
> > >> > Thanks in advance.
> > >> >
> > >> > Matteo
> > >> >
> > >> >
> > >> >
> > >> >>Which version of Hadoop are you using. A quick search shows me a bug
> > >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to
> show
> > >> >>similar symptoms. However, that was fixed a long while ago.
> > >> >>
> > >> >>
> > >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> > >> >>reduno1985@googlemail.com> wrote:
> > >> >>
> > >> >>> This is the content of the jobtracker log file:
> > >> >>> 2013-03-23 12:06:48,912 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> Input
> > >> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> > >> >>> 2013-03-23 12:06:48,925 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000000 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,927 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000001 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,930 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000002 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,931 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000003 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,933 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000004 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,934 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000005 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,939 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> tip:task_201303231139_0001_m_000006 has split on
> > >> >>> node:/default-rack/hadoop0.novalocal
> > >> >>> 2013-03-23 12:06:48,950 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> > >> >>> 2013-03-23 12:06:48,978 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> Job
> > >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
> and 1
> > >> >>> reduce tasks.
> > >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> > >> >>> Adding
> > >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> > >> >>> task_201303231139_0001_m_000008, for tracker
> > >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> > >> >>> 2013-03-23 12:08:00,340 INFO
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> Task
> > >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> > >> >>> task_201303231139_0001_m_000008 successfully.
> > >> >>> 2013-03-23 12:08:00,538 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>> 2013-03-23 12:08:00,543 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>> 2013-03-23 12:08:00,544 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>> 2013-03-23 12:08:00,544 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>> 2013-03-23 12:08:01,264 WARN
> org.apache.hadoop.mapred.JobInProgress:
> > >> >>> No
> > >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
> free;
> > >> >>> but we
> > >> >>> expect map to take 1317624576693539401
> > >> >>>
> > >> >>>
> > >> >>> The value in "we expect map to take" is far too huge: 1317624576693539401
> > >> >>> bytes!
> > >> >>>
> > >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> > >> >>> reduno1985@googlemail.com> wrote:
> > >> >>>
> > >> >>>> The estimated value that Hadoop computes is too huge for the simple
> > >> >>>> example that I am running.
> > >> >>>>
> > >> >>>> ---------- Forwarded message ----------
> > >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> > >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> > >> >>>> Subject: Re: About running a simple wordcount mapreduce
> > >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> > >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> > >> >>>>
> > >> >>>>
> > >> >>>> This is the output that I get. I am running two machines, as you can
> > >> >>>> see. Do you see anything suspicious?
> > >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> > >> >>>> Present Capacity: 17615499264 (16.41 GB)
> > >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> > >> >>>> DFS Used: 57344 (56 KB)
> > >> >>>> DFS Used%: 0%
> > >> >>>> Under replicated blocks: 0
> > >> >>>> Blocks with corrupt replicas: 0
> > >> >>>> Missing blocks: 0
> > >> >>>>
> > >> >>>> -------------------------------------------------
> > >> >>>> Datanodes available: 2 (2 total, 0 dead)
> > >> >>>>
> > >> >>>> Name: 11.1.0.6:50010
> > >> >>>> Decommission Status : Normal
> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> > >> >>>> DFS Used: 28672 (28 KB)
> > >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> > >> >>>> DFS Remaining: 8807800832(8.2 GB)
> > >> >>>> DFS Used%: 0%
> > >> >>>> DFS Remaining%: 83.31%
> > >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> > >> >>>>
> > >> >>>>
> > >> >>>> Name: 11.1.0.3:50010
> > >> >>>> Decommission Status : Normal
> > >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> > >> >>>> DFS Used: 28672 (28 KB)
> > >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> > >> >>>> DFS Remaining: 8807641088(8.2 GB)
> > >> >>>> DFS Used%: 0%
> > >> >>>> DFS Remaining%: 83.3%
> > >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> > >> >>>>
> > >> >>>>
> > >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> > >> >>>> ashettia@hortonworks.com> wrote:
> > >> >>>>
> > >> >>>>> Hi Redwane,
> > >> >>>>>
> > >> >>>>> Please run the following command as hdfs user on any datanode.
> The
> > >> >>>>> output will be something like this. Hope this helps
> > >> >>>>>
> > >> >>>>> hadoop dfsadmin -report
> > >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> > >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> > >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> > >> >>>>> DFS Used: 480129024 (457.89 MB)
> > >> >>>>> DFS Used%: 0.68%
> > >> >>>>> Under replicated blocks: 0
> > >> >>>>> Blocks with corrupt replicas: 0
> > >> >>>>> Missing blocks: 0
> > >> >>>>>
> > >> >>>>> Thanks
> > >> >>>>> -Abdelrahman
> > >> >>>>>
> > >> >>>>>
> > >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> > >> >>>>> <re...@googlemail.com>wrote:
> > >> >>>>>
> > >> >>>>>>
> > >> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
> > >> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
> > >> >>>>>> is in the HDFS without the web UI?
> > >> >>>>>>
> > >> >>>>>>
> > >> >>>>>> Sent from Samsung Mobile
> > >> >>>>>>
> > >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> > >> >>>>>> Check web ui how much space you have on hdfs???
> > >> >>>>>>
> > >> >>>>>> Sent from my iPhone
> > >> >>>>>>
> > >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> > >> >>>>>> ashettia@hortonworks.com> wrote:
> > >> >>>>>>
> > >> >>>>>> Hi Redwane ,
> > >> >>>>>>
> > >> >>>>>> It is possible that the hosts which are running tasks do not have
> > >> >>>>>> enough space. Those dirs are configured in mapred-site.xml
> > >> >>>>>>
> > >> >>>>>>
> > >> >>>>>>
> > >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> > >> >>>>>> reduno1985@googlemail.com> wrote:
> > >> >>>>>>
> > >> >>>>>>>
> > >> >>>>>>>
> > >> >>>>>>> ---------- Forwarded message ----------
> > >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> > >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> > >> >>>>>>> Subject: About running a simple wordcount mapreduce
> > >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> > >> >>>>>>>
> > >> >>>>>>>
> > >> >>>>>>> Hi
> > >> >>>>>>> I am trying to run a wordcount mapreduce job on several files
> > >> >>>>>>> (<20 MB) using two machines. I get stuck on 0% map, 0% reduce.
> > >> >>>>>>> The jobtracker log file shows the following warning:
> > >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
> > >> >>>>>>> task.
> > >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we
> expect
> > >> >>>>>>> map to
> > >> >>take
> > >> >>>>>>> 1317624576693539401
> > >> >>>>>>>
> > >> >>>>>>> Please help me ,
> > >> >>>>>>> Best Regards,
> > >> >>>>>>>
> > >> >>>>>>>
> > >> >>>>>>
> > >> >>>>>
> > >> >>>>
> > >> >>>>
> > >> >>>
> > >> >
> > >> >
> > >> > Matteo Lanati
> > >> > Distributed Resources Group
> > >> > Leibniz-Rechenzentrum (LRZ)
> > >> > Boltzmannstrasse 1
> > >> > 85748 Garching b. München (Germany)
> > >> > Phone: +49 89 35831 8724
> > >>
> > >>
> > >>
> > >> --
> > >> Harsh J
> > >
> > >
> >
> >
> >
> > --
> > Harsh J
> >
>
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748   Garching b. München     (Germany)
> Phone: +49 89 35831 8724
>
>

Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Azuryy,

thanks for the update. Sorry for the silly question, but where can I download the patched version?
If I look into the closest mirror (i.e. http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the Hadoop 1.1.2 version was last updated on Jan. 31st.
Thanks in advance,

Matteo

PS: just to confirm that I tried a minimal Hadoop 1.2.0 setup, so without any security, and the problem is there.

On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:

> Can you upgrade to 1.1.2, which is also a stable release and fixes the bug you are facing now?
> 
> --Send from my Sony mobile.
> 
> On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
> Thanks Harsh for the reply. I was confused too about why security is causing this.
> 
> Regards,
> Shahab
> 
> 
> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
> Shahab - I see he has mentioned generally that security is enabled
> (but not that it happens iff security is enabled), and the issue here
> doesn't have anything to do with security really.
> 
> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
> on the mapreduce-dev lists.
> 
> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com> wrote:
> > HI Harsh,
> >
> > Quick question though: why do you think it only happens if the OP 'uses
> > security' as he mentioned?
> >
> > Regards,
> > Shahab
> >
> >
> > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
> >>
> >> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> >> or 8 exbibytes.
> >>
> >> Looking at the sources, this turns out to be a rather funny Java issue
> >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> >> return in such a case). I've logged a bug report for this at
> >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> >> reproducible case.
> >>
> >> Does this happen consistently for you?
> >>
> >> [1]
> >> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
> >>
> >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> >> wrote:
> >> > Hi all,
> >> >
> >> > I stumbled upon this problem as well while trying to run the default
> >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
> >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
> >> > used as JT+NN, the other as TT+DN. Security is enabled. The input file is
> >> > about 600 kB and the error is
> >> >
> >> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
> >> > expect map to take 9223372036854775807
> >> >
> >> > The logfile is attached, together with the configuration files. The
> >> > version I'm using is
> >> >
> >> > Hadoop 1.2.0
> >> > Subversion
> >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> >> > 1479473
> >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> >> > This command was run using
> >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >> >
> >> > If I run the default configuration (i.e. no security), then the job
> >> > succeeds.
> >> >
> >> > Is there something missing in how I set up my nodes? How is it possible
> >> > that the envisaged value for the needed space is so big?
> >> >
> >> > Thanks in advance.
> >> >
> >> > Matteo
> >> >
> >> >
> >> >
> >> >>Which version of Hadoop are you using. A quick search shows me a bug
> >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >> >>similar symptoms. However, that was fixed a long while ago.
> >> >>
> >> >>
> >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >> >>reduno1985@googlemail.com> wrote:
> >> >>
> >> >>> This is the content of the jobtracker log file:
> >> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Input
> >> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000000 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000001 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000002 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000003 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000004 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000005 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000006 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Job
> >> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
> >> >>> reduce tasks.
> >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> >> >>> Adding
> >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> >>> task_201303231139_0001_m_000008, for tracker
> >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Task
> >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >> >>> task_201303231139_0001_m_000008 successfully.
> >> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>>
> >> >>>
> >> >>> The "we expect map to take" value is far too large: 1317624576693539401
> >> >>> bytes!
> >> >>>
> >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> >>> reduno1985@googlemail.com> wrote:
> >> >>>
> >> >>>> The estimated value that Hadoop computes is far too large for the
> >> >>>> simple
> >> >>>> example that I am running.
> >> >>>>
> >> >>>> ---------- Forwarded message ----------
> >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >> >>>> Subject: Re: About running a simple wordcount mapreduce
> >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >> >>>>
> >> >>>>
> >> >>>> This is the output that I get. I am running two machines, as you can
> >> >>>> see.
> >> >>>> Do you see anything suspicious?
> >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >> >>>> Present Capacity: 17615499264 (16.41 GB)
> >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >> >>>> DFS Used: 57344 (56 KB)
> >> >>>> DFS Used%: 0%
> >> >>>> Under replicated blocks: 0
> >> >>>> Blocks with corrupt replicas: 0
> >> >>>> Missing blocks: 0
> >> >>>>
> >> >>>> -------------------------------------------------
> >> >>>> Datanodes available: 2 (2 total, 0 dead)
> >> >>>>
> >> >>>> Name: 11.1.0.6:50010
> >> >>>> Decommission Status : Normal
> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> >>>> DFS Used: 28672 (28 KB)
> >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >> >>>> DFS Remaining: 8807800832(8.2 GB)
> >> >>>> DFS Used%: 0%
> >> >>>> DFS Remaining%: 83.31%
> >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >> >>>>
> >> >>>>
> >> >>>> Name: 11.1.0.3:50010
> >> >>>> Decommission Status : Normal
> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> >>>> DFS Used: 28672 (28 KB)
> >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >> >>>> DFS Remaining: 8807641088(8.2 GB)
> >> >>>> DFS Used%: 0%
> >> >>>> DFS Remaining%: 83.3%
> >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >> >>>>
> >> >>>>
> >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >> >>>> ashettia@hortonworks.com> wrote:
> >> >>>>
> >> >>>>> Hi Redwane,
> >> >>>>>
> >> >>>>> Please run the following command as hdfs user on any datanode. The
> >> >>>>> output will be something like this. Hope this helps
> >> >>>>>
> >> >>>>> hadoop dfsadmin -report
> >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >> >>>>> DFS Used: 480129024 (457.89 MB)
> >> >>>>> DFS Used%: 0.68%
> >> >>>>> Under replicated blocks: 0
> >> >>>>> Blocks with corrupt replicas: 0
> >> >>>>> Missing blocks: 0
> >> >>>>>
> >> >>>>> Thanks
> >> >>>>> -Abdelrahman
> >> >>>>>
> >> >>>>>
> >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> >> >>>>> <re...@googlemail.com>wrote:
> >> >>>>>
> >> >>>>>>
> >> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
> >> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
> >> >>>>>> is in
> >> >>>>>> HDFS without the web UI?
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> Sent from Samsung Mobile
> >> >>>>>>
> >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >> >>>>>> Check in the web UI how much space you have on HDFS.
> >> >>>>>>
> >> >>>>>> Sent from my iPhone
> >> >>>>>>
> >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >> >>>>>> ashettia@hortonworks.com> wrote:
> >> >>>>>>
> >> >>>>>> Hi Redwane ,
> >> >>>>>>
> >> >>>>>> It is possible that the hosts which are running tasks do not
> >> >>>>>> have
> >> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
> >> >>>>>>
> >> >>>>>>
> >> >>>>>>
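On Hadoop 1.x the directories referred to above are usually set through
mapred.local.dir in mapred-site.xml; a minimal illustrative snippet (the path
below is only a placeholder, not a recommended value) looks like:

  <property>
    <name>mapred.local.dir</name>
    <!-- placeholder path: point this at a disk with enough free space -->
    <value>/data/mapred/local</value>
  </property>
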
> >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >> >>>>>> reduno1985@googlemail.com> wrote:
> >> >>>>>>
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> ---------- Forwarded message ----------
> >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >> >>>>>>> Subject: About running a simple wordcount mapreduce
> >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> Hi
> >> >>>>>>> I am trying to run a wordcount mapreduce job on several files
> >> >>>>>>> (<20
> >> >>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
> >> >>>>>>> The jobtracker log file shows the following warning:
> >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
> >> >>>>>>> task.
> >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
> >> >>>>>>> map to
> >> >>take
> >> >>>>>>> 1317624576693539401
> >> >>>>>>>
> >> >>>>>>> Please help me ,
> >> >>>>>>> Best Regards,
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>
> >> >>>>>
> >> >>>>
> >> >>>>
> >> >>>
> >> >
> >> >
> >> > Matteo Lanati
> >> > Distributed Resources Group
> >> > Leibniz-Rechenzentrum (LRZ)
> >> > Boltzmannstrasse 1
> >> > 85748 Garching b. München (Germany)
> >> > Phone: +49 89 35831 8724
> >>
> >>
> >>
> >> --
> >> Harsh J
> >
> >
> 
> 
> 
> --
> Harsh J
> 

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724


Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Azuryy,

thanks for the update. Sorry for the silly question, but where can I download the patched version?
If I look into the closest mirror (i.e. http://mirror.netcologne.de/apache.org/hadoop/common/), I can see that the Hadoop 1.1.2 version was last updated on Jan. 31st.
Thanks in advance,

Matteo

PS: just to confirm, I tried a minimal Hadoop 1.2.0 setup, i.e. without any security, and the problem is still there.

On Jun 3, 2013, at 3:02 PM, Azuryy Yu <az...@gmail.com> wrote:

> can you upgrade to 1.1.2, which is also a stable release and fixes the bug you are facing now.
> 
> --Send from my Sony mobile.
> 
> On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:
> Thanks Harsh for the reply. I was confused too about why security would be causing this.
> 
> Regards,
> Shahab
> 
> 
> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
> Shahab - I see he has mentioned generally that security is enabled
> (but not that it happens iff security is enabled), and the issue here
> doesn't have anything to do with security really.
> 
> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
> on the mapreduce-dev lists.
> 
> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com> wrote:
> > HI Harsh,
> >
> > Quick question though: why do you think it only happens if the OP 'uses
> > security' as he mentioned?
> >
> > Regards,
> > Shahab
> >
> >
> > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
> >>
> >> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> >> or 8 exbibytes.
> >>
> >> Looking at the sources, this turns out to be a rather funny Java issue
> >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> >> return in such a case). I've logged a bug report for this at
> >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> >> reproducible case.
> >>
> >> Does this happen consistently for you?
> >>
> >> [1]
> >> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
> >>
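A minimal Java sketch of the rounding behaviour referenced in [1]; this is not
the Hadoop source, and the variable names below are only stand-ins for the
JobTracker's per-map disk estimate running into a zero denominator:

  public class RoundOverflowSketch {
      public static void main(String[] args) {
          double completedMapsInput = 0.0;     // hypothetical: no completed map input accounted yet
          double completedMapsOutput = 600e3;  // hypothetical: ~600 kB of map output

          // Dividing a double by zero does not throw; it yields Infinity...
          double blowup = completedMapsOutput / completedMapsInput;

          // ...and Math.round(double) maps Infinity to Long.MAX_VALUE (2^63 - 1,
          // roughly 8 exbibytes), which is the 9223372036854775807 in the log.
          System.out.println(Math.round(blowup));  // 9223372036854775807
          System.out.println(Long.MAX_VALUE);      // 9223372036854775807
      }
  }

Both println calls print the same number, which matches the "expect map to
take 9223372036854775807" warning quoted below.
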
> >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> >> wrote:
> >> > Hi all,
> >> >
> >> > I stumbled upon this problem as well while trying to run the default
> >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
> >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
> >> > used as JT+NN, the other as TT+DN. Security is enabled. The input file is
> >> > about 600 kB and the error is
> >> >
> >> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
> >> > expect map to take 9223372036854775807
> >> >
> >> > The logfile is attached, together with the configuration files. The
> >> > version I'm using is
> >> >
> >> > Hadoop 1.2.0
> >> > Subversion
> >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> >> > 1479473
> >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> >> > This command was run using
> >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >> >
> >> > If I run the default configuration (i.e. no security), then the job
> >> > succeeds.
> >> >
> >> > Is there something missing in how I set up my nodes? How is it possible
> >> > that the envisaged value for the needed space is so big?
> >> >
> >> > Thanks in advance.
> >> >
> >> > Matteo
> >> >
> >> >
> >> >
> >> >>Which version of Hadoop are you using. A quick search shows me a bug
> >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >> >>similar symptoms. However, that was fixed a long while ago.
> >> >>
> >> >>
> >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >> >>reduno1985@googlemail.com> wrote:
> >> >>
> >> >>> This is the content of the jobtracker log file:
> >> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Input
> >> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000000 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000001 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000002 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000003 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000004 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000005 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000006 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Job
> >> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
> >> >>> reduce tasks.
> >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> >> >>> Adding
> >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> >>> task_201303231139_0001_m_000008, for tracker
> >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Task
> >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >> >>> task_201303231139_0001_m_000008 successfully.
> >> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>>
> >> >>>
> >> >>> The "we expect map to take" value is far too large: 1317624576693539401
> >> >>> bytes!
> >> >>>
> >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> >>> reduno1985@googlemail.com> wrote:
> >> >>>
> >> >>>> The estimated value that Hadoop computes is far too large for the
> >> >>>> simple
> >> >>>> example that I am running.
> >> >>>>
> >> >>>> ---------- Forwarded message ----------
> >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >> >>>> Subject: Re: About running a simple wordcount mapreduce
> >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >> >>>>
> >> >>>>
> >> >>>> This is the output that I get. I am running two machines, as you can
> >> >>>> see.
> >> >>>> Do you see anything suspicious?
> >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >> >>>> Present Capacity: 17615499264 (16.41 GB)
> >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >> >>>> DFS Used: 57344 (56 KB)
> >> >>>> DFS Used%: 0%
> >> >>>> Under replicated blocks: 0
> >> >>>> Blocks with corrupt replicas: 0
> >> >>>> Missing blocks: 0
> >> >>>>
> >> >>>> -------------------------------------------------
> >> >>>> Datanodes available: 2 (2 total, 0 dead)
> >> >>>>
> >> >>>> Name: 11.1.0.6:50010
> >> >>>> Decommission Status : Normal
> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> >>>> DFS Used: 28672 (28 KB)
> >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >> >>>> DFS Remaining: 8807800832(8.2 GB)
> >> >>>> DFS Used%: 0%
> >> >>>> DFS Remaining%: 83.31%
> >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >> >>>>
> >> >>>>
> >> >>>> Name: 11.1.0.3:50010
> >> >>>> Decommission Status : Normal
> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> >>>> DFS Used: 28672 (28 KB)
> >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >> >>>> DFS Remaining: 8807641088(8.2 GB)
> >> >>>> DFS Used%: 0%
> >> >>>> DFS Remaining%: 83.3%
> >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >> >>>>
> >> >>>>
> >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >> >>>> ashettia@hortonworks.com> wrote:
> >> >>>>
> >> >>>>> Hi Redwane,
> >> >>>>>
> >> >>>>> Please run the following command as hdfs user on any datanode. The
> >> >>>>> output will be something like this. Hope this helps
> >> >>>>>
> >> >>>>> hadoop dfsadmin -report
> >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >> >>>>> DFS Used: 480129024 (457.89 MB)
> >> >>>>> DFS Used%: 0.68%
> >> >>>>> Under replicated blocks: 0
> >> >>>>> Blocks with corrupt replicas: 0
> >> >>>>> Missing blocks: 0
> >> >>>>>
> >> >>>>> Thanks
> >> >>>>> -Abdelrahman
> >> >>>>>
> >> >>>>>
> >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> >> >>>>> <re...@googlemail.com>wrote:
> >> >>>>>
> >> >>>>>>
> >> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
> >> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
> >> >>>>>> is in
> >> >>>>>> HDFS without the web UI?
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> Sent from Samsung Mobile
> >> >>>>>>
> >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >> >>>>>> Check in the web UI how much space you have on HDFS.
> >> >>>>>>
> >> >>>>>> Sent from my iPhone
> >> >>>>>>
> >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >> >>>>>> ashettia@hortonworks.com> wrote:
> >> >>>>>>
> >> >>>>>> Hi Redwane ,
> >> >>>>>>
> >> >>>>>> It is possible that the hosts which are running tasks do not
> >> >>>>>> have
> >> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
> >> >>>>>>
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >> >>>>>> reduno1985@googlemail.com> wrote:
> >> >>>>>>
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> ---------- Forwarded message ----------
> >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >> >>>>>>> Subject: About running a simple wordcount mapreduce
> >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> Hi
> >> >>>>>>> I am trying to run a wordcount mapreduce job on several files
> >> >>>>>>> (<20
> >> >>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
> >> >>>>>>> The jobtracker log file shows the following warning:
> >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
> >> >>>>>>> task.
> >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
> >> >>>>>>> map to
> >> >>take
> >> >>>>>>> 1317624576693539401
> >> >>>>>>>
> >> >>>>>>> Please help me ,
> >> >>>>>>> Best Regards,
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>
> >> >>>>>
> >> >>>>
> >> >>>>
> >> >>>
> >> >
> >> >
> >> > Matteo Lanati
> >> > Distributed Resources Group
> >> > Leibniz-Rechenzentrum (LRZ)
> >> > Boltzmannstrasse 1
> >> > 85748 Garching b. München (Germany)
> >> > Phone: +49 89 35831 8724
> >>
> >>
> >>
> >> --
> >> Harsh J
> >
> >
> 
> 
> 
> --
> Harsh J
> 

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748	Garching b. München	(Germany)
Phone: +49 89 35831 8724


Re:

Posted by Azuryy Yu <az...@gmail.com>.
can you upgrade to 1.1.2, which is also a stable release and fixes the bug
you are facing now.

--Send from my Sony mobile.
On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:

> Thanks Harsh for the reply. I was confused too about why security would be
> causing this.
>
> Regards,
> Shahab
>
>
> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>
>> Shahab - I see he has mentioned generally that security is enabled
>> (but not that it happens iff security is enabled), and the issue here
>> doesn't have anything to do with security really.
>>
>> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
>> on the mapreduce-dev lists.
>>
>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>> wrote:
>> > HI Harsh,
>> >
>> > Quick question though: why do you think it only happens if the OP 'uses
>> > security' as he mentioned?
>> >
>> > Regards,
>> > Shahab
>> >
>> >
>> > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>> >>
>> >> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> >> or 8 exbibytes.
>> >>
>> >> Looking at the sources, this turns out to be a rather funny Java issue
>> >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> >> return in such a case). I've logged a bug report for this at
>> >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> >> reproducible case.
>> >>
>> >> Does this happen consistently for you?
>> >>
>> >> [1]
>> >>
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>> >>
>> >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> >> wrote:
>> >> > Hi all,
>> >> >
>> >> > I stumbled upon this problem as well while trying to run the default
>> >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>> virtual
>> >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>> node is
>> >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
>> file is
>> >> > about 600 kB and the error is
>> >> >
>> >> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>> but we
>> >> > expect map to take 9223372036854775807
>> >> >
>> >> > The logfile is attached, together with the configuration files. The
>> >> > version I'm using is
>> >> >
>> >> > Hadoop 1.2.0
>> >> > Subversion
>> >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
>> >> > 1479473
>> >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> >> > This command was run using
>> >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> >> >
>> >> > If I run the default configuration (i.e. no security), then the job
>> >> > succeeds.
>> >> >
>> >> > Is there something missing in how I set up my nodes? How is it
>> possible
>> >> > that the envisaged value for the needed space is so big?
>> >> >
>> >> > Thanks in advance.
>> >> >
>> >> > Matteo
>> >> >
>> >> >
>> >> >
>> >> >>Which version of Hadoop are you using. A quick search shows me a bug
>> >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>> >> >>similar symptoms. However, that was fixed a long while ago.
>> >> >>
>> >> >>
>> >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> >> >>reduno1985@googlemail.com> wrote:
>> >> >>
>> >> >>> This is the content of the jobtracker log file:
>> >> >>> 2013-03-23 12:06:48,912 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> Input
>> >> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> >> >>> 2013-03-23 12:06:48,925 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000000 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,927 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000001 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,930 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000002 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,931 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000003 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,933 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000004 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,934 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000005 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,939 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000006 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,950 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> >> >>> 2013-03-23 12:06:48,978 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> Job
>> >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
>> and 1
>> >> >>> reduce tasks.
>> >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> >> >>> Adding
>> >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> >> >>> task_201303231139_0001_m_000008, for tracker
>> >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> >> >>> 2013-03-23 12:08:00,340 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> Task
>> >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> >> >>> task_201303231139_0001_m_000008 successfully.
>> >> >>> 2013-03-23 12:08:00,538 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>> 2013-03-23 12:08:00,543 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>> 2013-03-23 12:08:00,544 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>> 2013-03-23 12:08:00,544 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>> 2013-03-23 12:08:01,264 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>>
>> >> >>>
>> >> >>> The "we expect map to take" value is far too large:
>> >> >>> 1317624576693539401 bytes!
>> >> >>>
>> >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> >> >>> reduno1985@googlemail.com> wrote:
>> >> >>>
>> >> >>>> The estimated value that Hadoop computes is far too large for the
>> >> >>>> simple
>> >> >>>> example that I am running.
>> >> >>>>
>> >> >>>> ---------- Forwarded message ----------
>> >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> >> >>>> Subject: Re: About running a simple wordcount mapreduce
>> >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> >> >>>>
>> >> >>>>
>> >> >>>> This is the output that I get. I am running two machines, as you can
>> >> >>>> see.
>> >> >>>> Do you see anything suspicious?
>> >> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> >> >>>> Present Capacity: 17615499264 (16.41 GB)
>> >> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> >> >>>> DFS Used: 57344 (56 KB)
>> >> >>>> DFS Used%: 0%
>> >> >>>> Under replicated blocks: 0
>> >> >>>> Blocks with corrupt replicas: 0
>> >> >>>> Missing blocks: 0
>> >> >>>>
>> >> >>>> -------------------------------------------------
>> >> >>>> Datanodes available: 2 (2 total, 0 dead)
>> >> >>>>
>> >> >>>> Name: 11.1.0.6:50010
>> >> >>>> Decommission Status : Normal
>> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >> >>>> DFS Used: 28672 (28 KB)
>> >> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> >> >>>> DFS Remaining: 8807800832(8.2 GB)
>> >> >>>> DFS Used%: 0%
>> >> >>>> DFS Remaining%: 83.31%
>> >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> >> >>>>
>> >> >>>>
>> >> >>>> Name: 11.1.0.3:50010
>> >> >>>> Decommission Status : Normal
>> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >> >>>> DFS Used: 28672 (28 KB)
>> >> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> >> >>>> DFS Remaining: 8807641088(8.2 GB)
>> >> >>>> DFS Used%: 0%
>> >> >>>> DFS Remaining%: 83.3%
>> >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> >> >>>>
>> >> >>>>
>> >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> >> >>>> ashettia@hortonworks.com> wrote:
>> >> >>>>
>> >> >>>>> Hi Redwane,
>> >> >>>>>
>> >> >>>>> Please run the following command as hdfs user on any datanode.
>> The
>> >> >>>>> output will be something like this. Hope this helps
>> >> >>>>>
>> >> >>>>> hadoop dfsadmin -report
>> >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> >> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> >> >>>>> DFS Used: 480129024 (457.89 MB)
>> >> >>>>> DFS Used%: 0.68%
>> >> >>>>> Under replicated blocks: 0
>> >> >>>>> Blocks with corrupt replicas: 0
>> >> >>>>> Missing blocks: 0
>> >> >>>>>
>> >> >>>>> Thanks
>> >> >>>>> -Abdelrahman
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>> >> >>>>> <re...@googlemail.com>wrote:
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>> I have my hosts running on OpenStack virtual machine instances;
>> >> >>>>>> each instance has a 10 GB hard disk. Is there a way to see how much
>> >> >>>>>> space is in
>> >> >>>>>> HDFS without the web UI?
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> Sent from Samsung Mobile
>> >> >>>>>>
>> >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> >> >>>>>> Check in the web UI how much space you have on HDFS.
>> >> >>>>>>
>> >> >>>>>> Sent from my iPhone
>> >> >>>>>>
>> >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> >> >>>>>> ashettia@hortonworks.com> wrote:
>> >> >>>>>>
>> >> >>>>>> Hi Redwane ,
>> >> >>>>>>
>> >> >>>>>> It is possible that the hosts which are running tasks do not
>> >> >>>>>> have
>> >> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> >> >>>>>> reduno1985@googlemail.com> wrote:
>> >> >>>>>>
>> >> >>>>>>>
>> >> >>>>>>>
>> >> >>>>>>> ---------- Forwarded message ----------
>> >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> >> >>>>>>> Subject: About running a simple wordcount mapreduce
>> >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> >> >>>>>>>
>> >> >>>>>>>
>> >> >>>>>>> Hi
>> >> >>>>>>> I am trying to run a wordcount mapreduce job on several files
>> >> >>>>>>> (<20
>> >> >>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
>> >> >>>>>>> The jobtracker log file shows the following warning:
>> >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> >> >>>>>>> task.
>> >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
>> >> >>>>>>> map to
>> >> >>take
>> >> >>>>>>> 1317624576693539401
>> >> >>>>>>>
>> >> >>>>>>> Please help me ,
>> >> >>>>>>> Best Regards,
>> >> >>>>>>>
>> >> >>>>>>>
>> >> >>>>>>
>> >> >>>>>
>> >> >>>>
>> >> >>>>
>> >> >>>
>> >> >
>> >> >
>> >> > Matteo Lanati
>> >> > Distributed Resources Group
>> >> > Leibniz-Rechenzentrum (LRZ)
>> >> > Boltzmannstrasse 1
>> >> > 85748 Garching b. München (Germany)
>> >> > Phone: +49 89 35831 8724
>> >>
>> >>
>> >>
>> >> --
>> >> Harsh J
>> >
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re:

Posted by Azuryy Yu <az...@gmail.com>.
Can you upgrade to 1.1.2? It is also a stable release and fixes the bug
you are facing now.

--Send from my Sony mobile.
On Jun 2, 2013 3:23 AM, "Shahab Yunus" <sh...@gmail.com> wrote:

> Thanks Harsh for the reply. I was confused too about why security would be
> causing this.
>
> Regards,
> Shahab
>
>
> On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:
>
>> Shahab - I see he has mentioned generally that security is enabled
>> (but not that it happens iff security is enabled), and the issue here
>> doesn't have anything to do with security really.
>>
>> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
>> on the mapreduce-dev lists.
>>
>> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
>> wrote:
>> > HI Harsh,
>> >
>> > Quick question though: why do you think it only happens if the OP 'uses
>> > security' as he mentioned?
>> >
>> > Regards,
>> > Shahab
>> >
>> >
>> > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>> >>
>> >> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> >> or 8 exbibytes.
>> >>
>> >> Looking at the sources, this turns out to be a rather funny Java issue
>> >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> >> return in such a case). I've logged a bug report for this at
>> >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> >> reproducible case.
>> >>
>> >> Does this happen consistently for you?
>> >>
>> >> [1]
>> >>
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>> >>
>> >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> >> wrote:
>> >> > Hi all,
>> >> >
>> >> > I stumbled upon this problem as well while trying to run the default
>> >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
>> virtual
>> >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
>> node is
>> >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
>> file is
>> >> > about 600 kB and the error is
>> >> >
>> >> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress:
>> No
>> >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free;
>> but we
>> >> > expect map to take 9223372036854775807
>> >> >
>> >> > The logfile is attached, together with the configuration files. The
>> >> > version I'm using is
>> >> >
>> >> > Hadoop 1.2.0
>> >> > Subversion
>> >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
>> >> > 1479473
>> >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> >> > This command was run using
>> >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> >> >
>> >> > If I run the default configuration (i.e. no security), then the job
>> >> > succeeds.
>> >> >
>> >> > Is there something missing in how I set up my nodes? How is it
>> possible
>> >> > that the envisaged value for the needed space is so big?
>> >> >
>> >> > Thanks in advance.
>> >> >
>> >> > Matteo
>> >> >
>> >> >
>> >> >
>> >> >>Which version of Hadoop are you using. A quick search shows me a bug
>> >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>> >> >>similar symptoms. However, that was fixed a long while ago.
>> >> >>
>> >> >>
>> >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> >> >>reduno1985@googlemail.com> wrote:
>> >> >>
>> >> >>> This the content of the jobtracker log file :
>> >> >>> 2013-03-23 12:06:48,912 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> Input
>> >> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> >> >>> 2013-03-23 12:06:48,925 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000000 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,927 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000001 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,930 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000002 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,931 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000003 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,933 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000004 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,934 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000005 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,939 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> tip:task_201303231139_0001_m_000006 has split on
>> >> >>> node:/default-rack/hadoop0.novalocal
>> >> >>> 2013-03-23 12:06:48,950 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> >> >>> 2013-03-23 12:06:48,978 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> Job
>> >> >>> job_201303231139_0001 initialized successfully with 7 map tasks
>> and 1
>> >> >>> reduce tasks.
>> >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> >> >>> Adding
>> >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> >> >>> task_201303231139_0001_m_000008, for tracker
>> >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> >> >>> 2013-03-23 12:08:00,340 INFO
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> Task
>> >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> >> >>> task_201303231139_0001_m_000008 successfully.
>> >> >>> 2013-03-23 12:08:00,538 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>> 2013-03-23 12:08:00,543 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>> 2013-03-23 12:08:00,544 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>> 2013-03-23 12:08:00,544 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>> 2013-03-23 12:08:01,264 WARN
>> org.apache.hadoop.mapred.JobInProgress:
>> >> >>> No
>> >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes
>> free;
>> >> >>> but we
>> >> >>> expect map to take 1317624576693539401
>> >> >>>
>> >> >>>
>> >> >>> The value in "we expect map to take" is way too huge:
>> >> >>> 1317624576693539401 bytes!
>> >> >>>
>> >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> >> >>> reduno1985@googlemail.com> wrote:
>> >> >>>
>> >> >>>> The estimated value that Hadoop computes is too huge for the
>> >> >>>> simple
>> >> >>>> example that I am running.
>> >> >>>>
>> >> >>>> ---------- Forwarded message ----------
>> >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> >> >>>> Subject: Re: About running a simple wordcount mapreduce
>> >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> >> >>>>
>> >> >>>>
>> >> >>>> This is the output that I get; I am running two machines, as you can
>> see.
>> >> >>>> Do
>> >> >>>> you see anything suspicious?
>> >> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> >> >>>> Present Capacity: 17615499264 (16.41 GB)
>> >> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> >> >>>> DFS Used: 57344 (56 KB)
>> >> >>>> DFS Used%: 0%
>> >> >>>> Under replicated blocks: 0
>> >> >>>> Blocks with corrupt replicas: 0
>> >> >>>> Missing blocks: 0
>> >> >>>>
>> >> >>>> -------------------------------------------------
>> >> >>>> Datanodes available: 2 (2 total, 0 dead)
>> >> >>>>
>> >> >>>> Name: 11.1.0.6:50010
>> >> >>>> Decommission Status : Normal
>> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >> >>>> DFS Used: 28672 (28 KB)
>> >> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> >> >>>> DFS Remaining: 8807800832(8.2 GB)
>> >> >>>> DFS Used%: 0%
>> >> >>>> DFS Remaining%: 83.31%
>> >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> >> >>>>
>> >> >>>>
>> >> >>>> Name: 11.1.0.3:50010
>> >> >>>> Decommission Status : Normal
>> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >> >>>> DFS Used: 28672 (28 KB)
>> >> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> >> >>>> DFS Remaining: 8807641088(8.2 GB)
>> >> >>>> DFS Used%: 0%
>> >> >>>> DFS Remaining%: 83.3%
>> >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> >> >>>>
>> >> >>>>
>> >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> >> >>>> ashettia@hortonworks.com> wrote:
>> >> >>>>
>> >> >>>>> Hi Redwane,
>> >> >>>>>
>> >> >>>>> Please run the following command as hdfs user on any datanode.
>> The
>> >> >>>>> output will be something like this. Hope this helps
>> >> >>>>>
>> >> >>>>> hadoop dfsadmin -report
>> >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> >> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> >> >>>>> DFS Used: 480129024 (457.89 MB)
>> >> >>>>> DFS Used%: 0.68%
>> >> >>>>> Under replicated blocks: 0
>> >> >>>>> Blocks with corrupt replicas: 0
>> >> >>>>> Missing blocks: 0
>> >> >>>>>
>> >> >>>>> Thanks
>> >> >>>>> -Abdelrahman
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>> >> >>>>> <re...@googlemail.com>wrote:
>> >> >>>>>
>> >> >>>>>>
>> >> >>>>>> I have my hosts running on openstack virtual machine instances;
>> each
>> >> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much
>> space
>> >> >>>>>> is in
>> >> >>>>>> the hdfs without the web ui?
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> Sent from Samsung Mobile
>> >> >>>>>>
>> >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> >> >>>>>> Check web ui how much space you have on hdfs???
>> >> >>>>>>
>> >> >>>>>> Sent from my iPhone
>> >> >>>>>>
>> >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> >> >>>>>> ashettia@hortonworks.com> wrote:
>> >> >>>>>>
>> >> >>>>>> Hi Redwane ,
>> >> >>>>>>
>> >> >>>>>> It is possible that the hosts which are running tasks do not
>> >> >>>>>> have
>> >> >>>>>> enough space. Those dirs are configured in mapred-site.xml
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>>
>> >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> >> >>>>>> reduno1985@googlemail.com> wrote:
>> >> >>>>>>
>> >> >>>>>>>
>> >> >>>>>>>
>> >> >>>>>>> ---------- Forwarded message ----------
>> >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> >> >>>>>>> Subject: About running a simple wordcount mapreduce
>> >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> >> >>>>>>>
>> >> >>>>>>>
>> >> >>>>>>> Hi
>> >> >>>>>>> I am trying to run  a wordcount mapreduce job on several files
>> >> >>>>>>> (<20
>> >> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>> >> >>>>>>> The jobtracker log file shows the following warning:
>> >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> >> >>>>>>> task.
>> >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
>> >> >>>>>>> map to
>> >> >>take
>> >> >>>>>>> 1317624576693539401
>> >> >>>>>>>
>> >> >>>>>>> Please help me ,
>> >> >>>>>>> Best Regards,
>> >> >>>>>>>
>> >> >>>>>>>
>> >> >>>>>>
>> >> >>>>>
>> >> >>>>
>> >> >>>>
>> >> >>>
>> >> >
>> >> >
>> >> > Matteo Lanati
>> >> > Distributed Resources Group
>> >> > Leibniz-Rechenzentrum (LRZ)
>> >> > Boltzmannstrasse 1
>> >> > 85748 Garching b. München (Germany)
>> >> > Phone: +49 89 35831 8724
>> >>
>> >>
>> >>
>> >> --
>> >> Harsh J
>> >
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>
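
To make the rounding behaviour Harsh describes above concrete: Math.round(double) is documented to return Long.MAX_VALUE for positive infinity, so a positive estimate divided by a zero denominator comes out as 9223372036854775807 bytes. A minimal, self-contained Java sketch (the figures below are made up for illustration; this is not the actual JobInProgress/ResourceEstimator code):

    public class RoundingDemo {
        public static void main(String[] args) {
            // Hypothetical numbers: some observed map output size, but zero
            // observed input bytes, i.e. the divide-by-zero case described above.
            double observedOutputBytes = 600 * 1024;
            double observedInputBytes = 0;
            double estimate = observedOutputBytes / observedInputBytes; // +Infinity
            // Math.round(+Infinity) is defined to return Long.MAX_VALUE.
            System.out.println(Math.round(estimate)); // prints 9223372036854775807
        }
    }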

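Similarly, since Abdelrahman's reply above points out that the directories checked for free space are configured in mapred-site.xml, here is a minimal sketch of such an entry for Hadoop 1.x; the path is only a placeholder, so point it at a volume with enough free space:

    <!-- mapred-site.xml: hypothetical example, adjust the path to your setup -->
    <property>
      <name>mapred.local.dir</name>
      <value>/data/mapred/local</value>
    </property>
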
Re:

Posted by Shahab Yunus <sh...@gmail.com>.
Thanks Harsh for the reply. I was confused too about why security would be causing
this.

Regards,
Shahab


On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:

> Shahab - I see he has mentioned generally that security is enabled
> (but not that it happens iff security is enabled), and the issue here
> doesn't have anything to do with security really.
>
> Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
> on the mapreduce-dev lists.
>
> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
> wrote:
> > HI Harsh,
> >
> > Quick question though: why do you think it only happens if the OP 'uses
> > security' as he mentioned?
> >
> > Regards,
> > Shahab
> >
> >
> > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
> >>
> >> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> >> or 8 exbibytes.
> >>
> >> Looking at the sources, this turns out to be a rather funny Java issue
> >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> >> return in such a case). I've logged a bug report for this at
> >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> >> reproducible case.
> >>
> >> Does this happen consistently for you?
> >>
> >> [1]
> >>
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
> >>
> >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> >> wrote:
> >> > Hi all,
> >> >
> >> > I stumbled upon this problem as well while trying to run the default
> >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
> virtual
> >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
> node is
> >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
> file is
> >> > about 600 kB and the error is
> >> >
> >> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress:
> No
> >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free; but
> we
> >> > expect map to take 9223372036854775807
> >> >
> >> > The logfile is attached, together with the configuration files. The
> >> > version I'm using is
> >> >
> >> > Hadoop 1.2.0
> >> > Subversion
> >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> >> > 1479473
> >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> >> > This command was run using
> >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >> >
> >> > If I run the default configuration (i.e. no security), then the job
> >> > succeeds.
> >> >
> >> > Is there something missing in how I set up my nodes? How is it
> possible
> >> > that the envisaged value for the needed space is so big?
> >> >
> >> > Thanks in advance.
> >> >
> >> > Matteo
> >> >
> >> >
> >> >
> >> >>Which version of Hadoop are you using. A quick search shows me a bug
> >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >> >>similar symptoms. However, that was fixed a long while ago.
> >> >>
> >> >>
> >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >> >>reduno1985@googlemail.com> wrote:
> >> >>
> >> >>> This the content of the jobtracker log file :
> >> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Input
> >> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000000 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000001 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000002 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000003 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000004 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000005 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000006 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Job
> >> >>> job_201303231139_0001 initialized successfully with 7 map tasks and
> 1
> >> >>> reduce tasks.
> >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> >> >>> Adding
> >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> >>> task_201303231139_0001_m_000008, for tracker
> >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Task
> >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >> >>> task_201303231139_0001_m_000008 successfully.
> >> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>>
> >> >>>
> >> >>> The value in "we expect map to take" is way too huge:
> >> >>> 1317624576693539401 bytes!
> >> >>>
> >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> >>> reduno1985@googlemail.com> wrote:
> >> >>>
> >> >>>> The estimated value that Hadoop computes is too huge for the
> >> >>>> simple
> >> >>>> example that I am running.
> >> >>>>
> >> >>>> ---------- Forwarded message ----------
> >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >> >>>> Subject: Re: About running a simple wordcount mapreduce
> >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >> >>>>
> >> >>>>
> >> >>>> This is the output that I get; I am running two machines, as you can
> see.
> >> >>>> Do
> >> >>>> you see anything suspicious?
> >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >> >>>> Present Capacity: 17615499264 (16.41 GB)
> >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >> >>>> DFS Used: 57344 (56 KB)
> >> >>>> DFS Used%: 0%
> >> >>>> Under replicated blocks: 0
> >> >>>> Blocks with corrupt replicas: 0
> >> >>>> Missing blocks: 0
> >> >>>>
> >> >>>> -------------------------------------------------
> >> >>>> Datanodes available: 2 (2 total, 0 dead)
> >> >>>>
> >> >>>> Name: 11.1.0.6:50010
> >> >>>> Decommission Status : Normal
> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> >>>> DFS Used: 28672 (28 KB)
> >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >> >>>> DFS Remaining: 8807800832(8.2 GB)
> >> >>>> DFS Used%: 0%
> >> >>>> DFS Remaining%: 83.31%
> >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >> >>>>
> >> >>>>
> >> >>>> Name: 11.1.0.3:50010
> >> >>>> Decommission Status : Normal
> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> >>>> DFS Used: 28672 (28 KB)
> >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >> >>>> DFS Remaining: 8807641088(8.2 GB)
> >> >>>> DFS Used%: 0%
> >> >>>> DFS Remaining%: 83.3%
> >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >> >>>>
> >> >>>>
> >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >> >>>> ashettia@hortonworks.com> wrote:
> >> >>>>
> >> >>>>> Hi Redwane,
> >> >>>>>
> >> >>>>> Please run the following command as hdfs user on any datanode. The
> >> >>>>> output will be something like this. Hope this helps
> >> >>>>>
> >> >>>>> hadoop dfsadmin -report
> >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >> >>>>> DFS Used: 480129024 (457.89 MB)
> >> >>>>> DFS Used%: 0.68%
> >> >>>>> Under replicated blocks: 0
> >> >>>>> Blocks with corrupt replicas: 0
> >> >>>>> Missing blocks: 0
> >> >>>>>
> >> >>>>> Thanks
> >> >>>>> -Abdelrahman
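A quick aside on the suggestion above: if the cluster has a dedicated HDFS service user, the report can be run as that user, e.g.

  sudo -u hdfs hadoop dfsadmin -report

The "hdfs" user name here is an assumption; on a simple test setup you can run the command directly as whichever user started the daemons, and it shows cluster-wide and per-datanode capacity without the web UI.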
> >> >>>>>
> >> >>>>>
> >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> >> >>>>> <re...@googlemail.com>wrote:
> >> >>>>>
> >> >>>>>>
> >> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
> >> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
> >> >>>>>> is in HDFS without the web UI?
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> Sent from Samsung Mobile
> >> >>>>>>
> >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >> >>>>>> Have you checked in the web UI how much space you have on HDFS?
> >> >>>>>>
> >> >>>>>> Sent from my iPhone
> >> >>>>>>
> >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >> >>>>>> ashettia@hortonworks.com> wrote:
> >> >>>>>>
> >> >>>>>> Hi Redwane ,
> >> >>>>>>
> >> >>>>>> It is possible that the hosts which are running tasks do not have
> >> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
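For reference, a minimal mapred-site.xml sketch of that setting (the paths below are purely illustrative; mapred.local.dir is the Hadoop 1.x property listing the TaskTracker's local scratch directories, and, as far as I understand the TaskTracker's free-space reporting, that is the space the "No room for map task" check compares against):

  <property>
    <name>mapred.local.dir</name>
    <value>/data/1/mapred/local,/data/2/mapred/local</value>
  </property>

Each listed directory should sit on a disk with enough free space for intermediate map output.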
> >> >>>>>>
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >> >>>>>> reduno1985@googlemail.com> wrote:
> >> >>>>>>
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> ---------- Forwarded message ----------
> >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >> >>>>>>> Subject: About running a simple wordcount mapreduce
> >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> Hi
> >> >>>>>>> I am trying to run a wordcount mapreduce job on several files
> >> >>>>>>> (<20 MB) using two machines. I get stuck on 0% map 0% reduce.
> >> >>>>>>> The jobtracker log file shows the following warning:
> >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
> >> >>>>>>> task.
> >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
> >> >>>>>>> map to
> >> >>take
> >> >>>>>>> 1317624576693539401
> >> >>>>>>>
> >> >>>>>>> Please help me ,
> >> >>>>>>> Best Regards,
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>
> >> >>>>>
> >> >>>>
> >> >>>>
> >> >>>
> >> >
> >> >
> >> > Matteo Lanati
> >> > Distributed Resources Group
> >> > Leibniz-Rechenzentrum (LRZ)
> >> > Boltzmannstrasse 1
> >> > 85748 Garching b. München (Germany)
> >> > Phone: +49 89 35831 8724
> >>
> >>
> >>
> >> --
> >> Harsh J
> >
> >
>
>
>
> --
> Harsh J
>

Re:

Posted by Shahab Yunus <sh...@gmail.com>.
Thanks Harsh for the reply. I was confused too about why security would be
causing this.

Regards,
Shahab


On Sat, Jun 1, 2013 at 12:43 PM, Harsh J <ha...@cloudera.com> wrote:

> Shahab - I see he has mentioned generally that security is enabled
> (but not that it happens iff security is enabled), and the issue here
> doesn't have anything to do with security really.
>
> Azurry - Let's discuss the code issues on the JIRA (instead of here) or
> on the mapreduce-dev lists.
>
> On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com>
> wrote:
> > HI Harsh,
> >
> > Quick question though: why do you think it only happens if the OP 'uses
> > security' as he mentioned?
> >
> > Regards,
> > Shahab
> >
> >
> > On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
> >>
> >> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> >> or 8 exbibytes.
> >>
> >> Looking at the sources, this turns out to be a rather funny Java issue
> >> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> >> return in such a case). I've logged a bug report for this at
> >> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> >> reproducible case.
> >>
> >> Does this happen consistently for you?
> >>
> >> [1]
> >>
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
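A minimal, self-contained Java sketch of the arithmetic described above (the variable names are hypothetical, not the actual JobInProgress/estimator code): a positive double divided by zero yields Infinity, and Math.round(double) clamps Infinity to Long.MAX_VALUE, which is exactly the 9223372036854775807 bytes seen in the log line.

  public class BlowupSketch {
      public static void main(String[] args) {
          long observedOutputBytes = 600L * 1024;  // some map output has been observed...
          int completedMaps = 0;                   // ...but zero maps are counted as complete
          double perMapEstimate = (double) observedOutputBytes / completedMaps; // positive / 0.0 -> Infinity
          long expected = Math.round(perMapEstimate); // Math.round(+Infinity) == Long.MAX_VALUE
          System.out.println(expected);            // prints 9223372036854775807
      }
  }

Compiling and running this prints the same value that appears after "we expect map to take" in Matteo's log.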
> >>
> >> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> >> wrote:
> >> > Hi all,
> >> >
> >> > I stumbled upon this problem as well while trying to run the default
> >> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2
> virtual
> >> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One
> node is
> >> > used as JT+NN, the other as TT+DN. Security is enabled. The input
> file is
> >> > about 600 kB and the error is
> >> >
> >> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress:
> No
> >> > room for map task. Node 10.156.120.49 has 22854692864 bytes free; but
> we
> >> > expect map to take 9223372036854775807
> >> >
> >> > The logfile is attached, together with the configuration files. The
> >> > version I'm using is
> >> >
> >> > Hadoop 1.2.0
> >> > Subversion
> >> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> >> > 1479473
> >> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> >> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> >> > This command was run using
> >> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >> >
> >> > If I run the default configuration (i.e. no security), then the job
> >> > succeeds.
> >> >
> >> > Is there something missing in how I set up my nodes? How is it
> possible
> >> > that the envisaged value for the needed space is so big?
> >> >
> >> > Thanks in advance.
> >> >
> >> > Matteo
> >> >
> >> >
> >> >
> >> >>Which version of Hadoop are you using. A quick search shows me a bug
> >> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >> >>similar symptoms. However, that was fixed a long while ago.
> >> >>
> >> >>
> >> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >> >>reduno1985@googlemail.com> wrote:
> >> >>
> >> >>> This is the content of the jobtracker log file:
> >> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Input
> >> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000000 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000001 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000002 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000003 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000004 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000005 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> tip:task_201303231139_0001_m_000006 has split on
> >> >>> node:/default-rack/hadoop0.novalocal
> >> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Job
> >> >>> job_201303231139_0001 initialized successfully with 7 map tasks and
> 1
> >> >>> reduce tasks.
> >> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> >> >>> Adding
> >> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> >>> task_201303231139_0001_m_000008, for tracker
> >> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> >> >>> Task
> >> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >> >>> task_201303231139_0001_m_000008 successfully.
> >> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
> >> >>> No
> >> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> >> >>> but we
> >> >>> expect map to take 1317624576693539401
> >> >>>
> >> >>>
> >> >>> The value in "we expect map to take" is too huge: 1317624576693539401
> >> >>> bytes!
> >> >>>
> >> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> >>> reduno1985@googlemail.com> wrote:
> >> >>>
> >> >>>> The estimated value that Hadoop computes is too huge for the
> >> >>>> simple example that I am running.
> >> >>>>
> >> >>>> ---------- Forwarded message ----------
> >> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >> >>>> Subject: Re: About running a simple wordcount mapreduce
> >> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >> >>>>
> >> >>>>
> >> >>>> This is the output that I get. I am running two machines, as you can
> >> >>>> see. Do you see anything suspicious?
> >> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >> >>>> Present Capacity: 17615499264 (16.41 GB)
> >> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >> >>>> DFS Used: 57344 (56 KB)
> >> >>>> DFS Used%: 0%
> >> >>>> Under replicated blocks: 0
> >> >>>> Blocks with corrupt replicas: 0
> >> >>>> Missing blocks: 0
> >> >>>>
> >> >>>> -------------------------------------------------
> >> >>>> Datanodes available: 2 (2 total, 0 dead)
> >> >>>>
> >> >>>> Name: 11.1.0.6:50010
> >> >>>> Decommission Status : Normal
> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> >>>> DFS Used: 28672 (28 KB)
> >> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >> >>>> DFS Remaining: 8807800832(8.2 GB)
> >> >>>> DFS Used%: 0%
> >> >>>> DFS Remaining%: 83.31%
> >> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >> >>>>
> >> >>>>
> >> >>>> Name: 11.1.0.3:50010
> >> >>>> Decommission Status : Normal
> >> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >> >>>> DFS Used: 28672 (28 KB)
> >> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >> >>>> DFS Remaining: 8807641088(8.2 GB)
> >> >>>> DFS Used%: 0%
> >> >>>> DFS Remaining%: 83.3%
> >> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >> >>>>
> >> >>>>
> >> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >> >>>> ashettia@hortonworks.com> wrote:
> >> >>>>
> >> >>>>> Hi Redwane,
> >> >>>>>
> >> >>>>> Please run the following command as hdfs user on any datanode. The
> >> >>>>> output will be something like this. Hope this helps
> >> >>>>>
> >> >>>>> hadoop dfsadmin -report
> >> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >> >>>>> DFS Used: 480129024 (457.89 MB)
> >> >>>>> DFS Used%: 0.68%
> >> >>>>> Under replicated blocks: 0
> >> >>>>> Blocks with corrupt replicas: 0
> >> >>>>> Missing blocks: 0
> >> >>>>>
> >> >>>>> Thanks
> >> >>>>> -Abdelrahman
> >> >>>>>
> >> >>>>>
> >> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
> >> >>>>> <re...@googlemail.com>wrote:
> >> >>>>>
> >> >>>>>>
> >> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
> >> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
> >> >>>>>> is in HDFS without the web UI?
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> Sent from Samsung Mobile
> >> >>>>>>
> >> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >> >>>>>> Have you checked in the web UI how much space you have on HDFS?
> >> >>>>>>
> >> >>>>>> Sent from my iPhone
> >> >>>>>>
> >> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >> >>>>>> ashettia@hortonworks.com> wrote:
> >> >>>>>>
> >> >>>>>> Hi Redwane ,
> >> >>>>>>
> >> >>>>>> It is possible that the hosts which are running tasks do not have
> >> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
> >> >>>>>>
> >> >>>>>>
> >> >>>>>>
> >> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >> >>>>>> reduno1985@googlemail.com> wrote:
> >> >>>>>>
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> ---------- Forwarded message ----------
> >> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >> >>>>>>> Subject: About running a simple wordcount mapreduce
> >> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> Hi
> >> >>>>>>> I am trying to run a wordcount mapreduce job on several files
> >> >>>>>>> (<20 MB) using two machines. I get stuck on 0% map 0% reduce.
> >> >>>>>>> The jobtracker log file shows the following warning:
> >> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
> >> >>>>>>> task.
> >> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
> >> >>>>>>> map to
> >> >>take
> >> >>>>>>> 1317624576693539401
> >> >>>>>>>
> >> >>>>>>> Please help me ,
> >> >>>>>>> Best Regards,
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>
> >> >>>>>
> >> >>>>
> >> >>>>
> >> >>>
> >> >
> >> >
> >> > Matteo Lanati
> >> > Distributed Resources Group
> >> > Leibniz-Rechenzentrum (LRZ)
> >> > Boltzmannstrasse 1
> >> > 85748 Garching b. München (Germany)
> >> > Phone: +49 89 35831 8724
> >>
> >>
> >>
> >> --
> >> Harsh J
> >
> >
>
>
>
> --
> Harsh J
>

Re:

Posted by Harsh J <ha...@cloudera.com>.
Shahab - I see he has mentioned generally that security is enabled
(but not that it happens iff security is enabled), and the issue here
doesn't have anything to do with security really.

Azurry - Let's discuss the code issues on the JIRA (instead of here) or
on the mapreduce-dev lists.

On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com> wrote:
> HI Harsh,
>
> Quick question though: why do you think it only happens if the OP 'uses
> security' as he mentioned?
>
> Regards,
> Shahab
>
>
> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>>
>> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> or 8 exbibytes.
>>
>> Looking at the sources, this turns out to be a rather funny Java issue
>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> return in such a case). I've logged a bug report for this at
>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> reproducible case.
>>
>> Does this happen consistently for you?
>>
>> [1]
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>
>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> wrote:
>> > Hi all,
>> >
>> > I stumbled upon this problem as well while trying to run the default
>> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
>> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
>> > used as JT+NN, the other as TT+DN. Security is enabled. The input file is
>> > about 600 kB and the error is
>> >
>> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
>> > room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
>> > expect map to take 9223372036854775807
>> >
>> > The logfile is attached, together with the configuration files. The
>> > version I'm using is
>> >
>> > Hadoop 1.2.0
>> > Subversion
>> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
>> > 1479473
>> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> > This command was run using
>> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> >
>> > If I run the default configuration (i.e. no security), then the job
>> > succeeds.
>> >
>> > Is there something missing in how I set up my nodes? How is it possible
>> > that the envisaged value for the needed space is so big?
>> >
>> > Thanks in advance.
>> >
>> > Matteo
>> >
>> >
>> >
>> >>Which version of Hadoop are you using. A quick search shows me a bug
>> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>> >>similar symptoms. However, that was fixed a long while ago.
>> >>
>> >>
>> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> >>reduno1985@googlemail.com> wrote:
>> >>
>> >>> This is the content of the jobtracker log file:
>> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> Input
>> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000000 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000001 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000002 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000003 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000004 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000005 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000006 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> Job
>> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> >>> reduce tasks.
>> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> >>> Adding
>> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> >>> task_201303231139_0001_m_000008, for tracker
>> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> Task
>> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> >>> task_201303231139_0001_m_000008 successfully.
>> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>>
>> >>>
>> >>> The value in "we expect map to take" is too huge: 1317624576693539401
>> >>> bytes!
>> >>>
>> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> >>> reduno1985@googlemail.com> wrote:
>> >>>
>> >>>> The estimated value that Hadoop computes is too huge for the
>> >>>> simple example that I am running.
>> >>>>
>> >>>> ---------- Forwarded message ----------
>> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> >>>> Subject: Re: About running a simple wordcount mapreduce
>> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> >>>>
>> >>>>
>> >>>> This is the output that I get. I am running two machines, as you can see.
>> >>>> Do you see anything suspicious?
>> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> >>>> Present Capacity: 17615499264 (16.41 GB)
>> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> >>>> DFS Used: 57344 (56 KB)
>> >>>> DFS Used%: 0%
>> >>>> Under replicated blocks: 0
>> >>>> Blocks with corrupt replicas: 0
>> >>>> Missing blocks: 0
>> >>>>
>> >>>> -------------------------------------------------
>> >>>> Datanodes available: 2 (2 total, 0 dead)
>> >>>>
>> >>>> Name: 11.1.0.6:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> >>>> DFS Remaining: 8807800832(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.31%
>> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> >>>>
>> >>>>
>> >>>> Name: 11.1.0.3:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> >>>> DFS Remaining: 8807641088(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.3%
>> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> >>>>
>> >>>>
>> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> >>>> ashettia@hortonworks.com> wrote:
>> >>>>
>> >>>>> Hi Redwane,
>> >>>>>
>> >>>>> Please run the following command as hdfs user on any datanode. The
>> >>>>> output will be something like this. Hope this helps
>> >>>>>
>> >>>>> hadoop dfsadmin -report
>> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> >>>>> DFS Used: 480129024 (457.89 MB)
>> >>>>> DFS Used%: 0.68%
>> >>>>> Under replicated blocks: 0
>> >>>>> Blocks with corrupt replicas: 0
>> >>>>> Missing blocks: 0
>> >>>>>
>> >>>>> Thanks
>> >>>>> -Abdelrahman
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>> >>>>> <re...@googlemail.com>wrote:
>> >>>>>
>> >>>>>>
>> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
>> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
>> >>>>>> is in HDFS without the web UI?
>> >>>>>>
>> >>>>>>
>> >>>>>> Sent from Samsung Mobile
>> >>>>>>
>> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> >>>>>> Have you checked in the web UI how much space you have on HDFS?
>> >>>>>>
>> >>>>>> Sent from my iPhone
>> >>>>>>
>> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> >>>>>> ashettia@hortonworks.com> wrote:
>> >>>>>>
>> >>>>>> Hi Redwane ,
>> >>>>>>
>> >>>>>> It is possible that the hosts which are running tasks do not have
>> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> >>>>>> reduno1985@googlemail.com> wrote:
>> >>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> ---------- Forwarded message ----------
>> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> >>>>>>> Subject: About running a simple wordcount mapreduce
>> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Hi
>> >>>>>>> I am trying to run a wordcount mapreduce job on several files
>> >>>>>>> (<20 MB) using two machines. I get stuck on 0% map 0% reduce.
>> >>>>>>> The jobtracker log file shows the following warning:
>> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map
>> >>>>>>> task.
>> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
>> >>>>>>> map to
>> >>take
>> >>>>>>> 1317624576693539401
>> >>>>>>>
>> >>>>>>> Please help me ,
>> >>>>>>> Best Regards,
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >
>> >
>> > Matteo Lanati
>> > Distributed Resources Group
>> > Leibniz-Rechenzentrum (LRZ)
>> > Boltzmannstrasse 1
>> > 85748 Garching b. München (Germany)
>> > Phone: +49 89 35831 8724
>>
>>
>>
>> --
>> Harsh J
>
>



-- 
Harsh J

Re:

Posted by Harsh J <ha...@cloudera.com>.
Shahab - I see he has mentioned generally that security is enabled
(but not that it happens iff security is enabled), and the issue here
doesn't have anything to do with security really.

Azurry - Lets discuss the code issues on the JIRA (instead of here) or
on the mapreduce-dev lists.

On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com> wrote:
> HI Harsh,
>
> Quick question though: why do you think it only happens if the OP 'uses
> security' as he mentioned?
>
> Regards,
> Shahab
>
>
> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>>
>> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> or 8 exbibytes.
>>
>> Looking at the sources, this turns out to be a rather funny Java issue
>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> return in such a case). I've logged a bug report for this at
>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> reproducible case.
>>
>> Does this happen consistently for you?
>>
>> [1]
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>
>> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
>> wrote:
>> > Hi all,
>> >
>> > I stumbled upon this problem as well while trying to run the default
>> > wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
>> > machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
>> > used as JT+NN, the other as TT+DN. Security is enabled. The input file is
>> > about 600 kB and the error is
>> >
>> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
>> > room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
>> > expect map to take 9223372036854775807
>> >
>> > The logfile is attached, together with the configuration files. The
>> > version I'm using is
>> >
>> > Hadoop 1.2.0
>> > Subversion
>> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
>> > 1479473
>> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
>> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
>> > This command was run using
>> > /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>> >
>> > If I run the default configuration (i.e. no securty), then the job
>> > succeeds.
>> >
>> > Is there something missing in how I set up my nodes? How is it possible
>> > that the envisaged value for the needed space is so big?
>> >
>> > Thanks in advance.
>> >
>> > Matteo
>> >
>> >
>> >
>> >>Which version of Hadoop are you using. A quick search shows me a bug
>> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>> >>similar symptoms. However, that was fixed a long while ago.
>> >>
>> >>
>> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>> >>reduno1985@googlemail.com> wrote:
>> >>
>> >>> This the content of the jobtracker log file :
>> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> Input
>> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000000 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000001 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000002 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000003 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000004 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000005 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> tip:task_201303231139_0001_m_000006 has split on
>> >>> node:/default-rack/hadoop0.novalocal
>> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> Job
>> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>> >>> reduce tasks.
>> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
>> >>> Adding
>> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>> >>> task_201303231139_0001_m_000008, for tracker
>> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
>> >>> Task
>> >>> 'attempt_201303231139_0001_m_000008_0' has completed
>> >>> task_201303231139_0001_m_000008 successfully.
>> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress:
>> >>> No
>> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
>> >>> but we
>> >>> expect map to take 1317624576693539401
>> >>>
>> >>>
>> >>> The value in we excpect map to take is too huge   1317624576693539401
>> >>> bytes  !!!!!!!
>> >>>
>> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>> >>> reduno1985@googlemail.com> wrote:
>> >>>
>> >>>> The estimated value that the hadoop compute is too huge for the
>> >>>> simple
>> >>>> example that i am running .
>> >>>>
>> >>>> ---------- Forwarded message ----------
>> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>> >>>> Subject: Re: About running a simple wordcount mapreduce
>> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>> >>>>
>> >>>>
>> >>>> This is the output that I get. I am running two machines, as you can see;
>> >>>> do you see anything suspicious?
>> >>>> Configured Capacity: 21145698304 (19.69 GB)
>> >>>> Present Capacity: 17615499264 (16.41 GB)
>> >>>> DFS Remaining: 17615441920 (16.41 GB)
>> >>>> DFS Used: 57344 (56 KB)
>> >>>> DFS Used%: 0%
>> >>>> Under replicated blocks: 0
>> >>>> Blocks with corrupt replicas: 0
>> >>>> Missing blocks: 0
>> >>>>
>> >>>> -------------------------------------------------
>> >>>> Datanodes available: 2 (2 total, 0 dead)
>> >>>>
>> >>>> Name: 11.1.0.6:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765019648 (1.64 GB)
>> >>>> DFS Remaining: 8807800832(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.31%
>> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>> >>>>
>> >>>>
>> >>>> Name: 11.1.0.3:50010
>> >>>> Decommission Status : Normal
>> >>>> Configured Capacity: 10572849152 (9.85 GB)
>> >>>> DFS Used: 28672 (28 KB)
>> >>>> Non DFS Used: 1765179392 (1.64 GB)
>> >>>> DFS Remaining: 8807641088(8.2 GB)
>> >>>> DFS Used%: 0%
>> >>>> DFS Remaining%: 83.3%
>> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>> >>>>
>> >>>>
>> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>> >>>> ashettia@hortonworks.com> wrote:
>> >>>>
>> >>>>> Hi Redwane,
>> >>>>>
>> >>>>> Please run the following command as hdfs user on any datanode. The
>> >>>>> output will be something like this. Hope this helps
>> >>>>>
>> >>>>> hadoop dfsadmin -report
>> >>>>> Configured Capacity: 81075068925 (75.51 GB)
>> >>>>> Present Capacity: 70375292928 (65.54 GB)
>> >>>>> DFS Remaining: 69895163904 (65.09 GB)
>> >>>>> DFS Used: 480129024 (457.89 MB)
>> >>>>> DFS Used%: 0.68%
>> >>>>> Under replicated blocks: 0
>> >>>>> Blocks with corrupt replicas: 0
>> >>>>> Missing blocks: 0
>> >>>>>
>> >>>>> Thanks
>> >>>>> -Abdelrahman
>> >>>>>
>> >>>>>
>> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985
>> >>>>> <re...@googlemail.com>wrote:
>> >>>>>
>> >>>>>>
>> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
>> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
>> >>>>>> is in HDFS without the web UI?
>> >>>>>>
>> >>>>>>
>> >>>>>> Sent from Samsung Mobile
>> >>>>>>
>> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>> >>>>>> Check on the web UI how much space you have on HDFS.
>> >>>>>>
>> >>>>>> Sent from my iPhone
>> >>>>>>
>> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>> >>>>>> ashettia@hortonworks.com> wrote:
>> >>>>>>
>> >>>>>> Hi Redwane ,
>> >>>>>>
>> >>>>>> It is possible that the hosts which are running tasks do not have
>> >>>>>> enough space. Those dirs are configured in mapred-site.xml.
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>> >>>>>> reduno1985@googlemail.com> wrote:
>> >>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> ---------- Forwarded message ----------
>> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>> >>>>>>> Subject: About running a simple wordcount mapreduce
>> >>>>>>> To: mapreduce-issues@hadoop.apache.org
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Hi
>> >>>>>>> I am trying to run a wordcount mapreduce job on several files (<20
>> >>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
>> >>>>>>> The jobtracker log file shows the following warning:
>> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
>> >>>>>>> take 1317624576693539401
>> >>>>>>>
>> >>>>>>> Please help me ,
>> >>>>>>> Best Regards,
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >
>> >
>> > Matteo Lanati
>> > Distributed Resources Group
>> > Leibniz-Rechenzentrum (LRZ)
>> > Boltzmannstrasse 1
>> > 85748 Garching b. München (Germany)
>> > Phone: +49 89 35831 8724
>>
>>
>>
>> --
>> Harsh J
>
>



-- 
Harsh J
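The warning quoted above boils down to a simple free-space comparison; the
following is a minimal sketch (not the actual JobInProgress code; the numbers
are copied from the log above) of why a nonsensically large estimate blocks
every node and leaves the job stuck at 0%:

  public class NoRoomCheckDemo {
    public static void main(String[] args) {
      long nodeFreeBytes = 8791543808L;                  // "Node hadoop0.novalocal has ... bytes free"
      long expectedMapFootprint = 1317624576693539401L;  // "we expect map to take ..."
      // No node in a small testbed can ever satisfy such an estimate,
      // so no map task is ever scheduled.
      if (nodeFreeBytes < expectedMapFootprint) {
        System.out.println("No room for map task on this node");
      }
    }
  }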

Re:

Posted by Harsh J <ha...@cloudera.com>.
Shahab - I see he has mentioned generally that security is enabled
(but not that the problem happens iff security is enabled), and the issue
here doesn't really have anything to do with security.

Azuryy - Let's discuss the code issues on the JIRA (instead of here) or
on the mapreduce-dev lists.

On Sat, Jun 1, 2013 at 10:05 PM, Shahab Yunus <sh...@gmail.com> wrote:
> Hi Harsh,
>
> Quick question though: why do you think it only happens if the OP 'uses
> security' as he mentioned?
>
> Regards,
> Shahab
>
>
> On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:
>>
>> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
>> or 8 exbibytes.
>>
>> Looking at the sources, this turns out to be a rather funny Java issue
>> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
>> return in such a case). I've logged a bug report for this at
>> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
>> reproducible case.
>>
>> Does this happen consistently for you?
>>
>> [1]
>> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>>
>>
>>
>>
>> --
>> Harsh J
>
>



-- 
Harsh J

Re:

Posted by Shahab Yunus <sh...@gmail.com>.
Hi Harsh,

Quick question though: why do you think it only happens if the OP 'uses
security' as he mentioned?

Regards,
Shahab


On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:

> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> or 8 exbibytes.
>
> Looking at the sources, this turns out to be a rather funny Java issue
> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> return in such a case). I've logged a bug report for this at
> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> reproducible case.
>
> Does this happen consistently for you?
>
> [1]
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>
>
>
>
> --
> Harsh J
>

Re:

Posted by Azuryy Yu <az...@gmail.com>.
This should be fixed in the hadoop-1.1.2 stable release.
If completedMapsInputSize is zero, then the job's map task count MUST
be zero, so the estimated output size is zero.
Below is the code:

  long getEstimatedMapOutputSize() {
    long estimate = 0L;
    // Only divide when the job actually has map tasks; otherwise the
    // estimate stays 0 instead of dividing by zero.
    if (job.desiredMaps() > 0) {
      estimate = getEstimatedTotalMapOutputSize()  / job.desiredMaps();
    }
    return estimate;
  }
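To connect the guard above to the figure from the original report: the
following is a stand-alone illustration, not the Hadoop source, and the
variable names and numbers are made up. A positive double divided by 0.0 is
POSITIVE_INFINITY, and Math.round() is defined to return Long.MAX_VALUE for
positive infinity (see the Math.round javadoc linked in this thread), which is
exactly the 9223372036854775807 seen in the warning.

  public class RoundOverflowDemo {
    public static void main(String[] args) {
      double completedMapsOutputSize = 12345.0; // hypothetical: some map output already recorded
      double completedMapsInputSize = 0.0;      // hypothetical: no completed map input yet
      double perMapEstimate = completedMapsOutputSize / completedMapsInputSize; // POSITIVE_INFINITY
      System.out.println(Math.round(perMapEstimate)); // prints 9223372036854775807 (Long.MAX_VALUE)
    }
  }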



On Sat, Jun 1, 2013 at 11:49 PM, Harsh J <ha...@cloudera.com> wrote:

> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> or 8 exbibytes.
>
> Looking at the sources, this turns out to be a rather funny Java issue
> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> return in such a case). I've logged a bug report for this at
> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> reproducible case.
>
> Does this happen consistently for you?
>
> [1]
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>
>
>
>
> --
> Harsh J
>

RE: Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Harsh,

Thanks for the quick investigation.
This seems to fit my case: the job is just submitted but stuck at 0%.
Bye,

Matteo

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724

________________________________________
From: Harsh J [harsh@cloudera.com]
Sent: 01 June 2013 17:50
To: <us...@hadoop.apache.org>
Subject: Re:

Does smell like a bug as that number you get is simply Long.MAX_VALUE,
or 8 exbibytes.

Looking at the sources, this turns out to be a rather funny Java issue
(there's a divide by zero happening and [1] suggests Long.MAX_VALUE
return in such a case). I've logged a bug report for this at
https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
reproducible case.

Does this happen consistently for you?

[1] http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)




--
Harsh J

Re:

Posted by Shahab Yunus <sh...@gmail.com>.
Hi Harsh,

Quick question though: why do you think it only happens if the OP 'uses
security' as he mentioned?

Regards,
Shahab


On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:

> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> or 8 exbibytes.
>
> Looking at the sources, this turns out to be a rather funny Java issue
> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> return in such a case). I've logged a bug report for this at
> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> reproducible case.
>
> Does this happen consistently for you?
>
> [1]
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>
> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> wrote:
> > Hi all,
> >
> > I stumbled upon this problem as well while trying to run the default
> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
> about 600 kB and the error is
> >
> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
> expect map to take 9223372036854775807
> >
> > The logfile is attached, together with the configuration files. The
> version I'm using is
> >
> > Hadoop 1.2.0
> > Subversion
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> 1479473
> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> > This command was run using
> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >
> > If I run the default configuration (i.e. no securty), then the job
> succeeds.
> >
> > Is there something missing in how I set up my nodes? How is it possible
> that the envisaged value for the needed space is so big?
> >
> > Thanks in advance.
> >
> > Matteo
> >
> >
> >
> >>Which version of Hadoop are you using. A quick search shows me a bug
> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >>similar symptoms. However, that was fixed a long while ago.
> >>
> >>
> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >>reduno1985@googlemail.com> wrote:
> >>
> >>> This the content of the jobtracker log file :
> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> Input
> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000000 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000001 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000002 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000003 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000004 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000005 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000006 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
> Job
> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
> >>> reduce tasks.
> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> Adding
> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >>> task_201303231139_0001_m_000008, for tracker
> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> Task
> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >>> task_201303231139_0001_m_000008 successfully.
> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>>
> >>>
> >>> The value in we excpect map to take is too huge   1317624576693539401
> >>> bytes  !!!!!!!
> >>>
> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >>> reduno1985@googlemail.com> wrote:
> >>>
> >>>> The estimated value that the hadoop compute is too huge for the simple
> >>>> example that i am running .
> >>>>
> >>>> ---------- Forwarded message ----------
> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >>>> Subject: Re: About running a simple wordcount mapreduce
> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >>>>
> >>>>
> >>>> This the output that I get I am running two machines  as you can see
>  do
> >>>> u see anything suspicious ?
> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >>>> Present Capacity: 17615499264 (16.41 GB)
> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >>>> DFS Used: 57344 (56 KB)
> >>>> DFS Used%: 0%
> >>>> Under replicated blocks: 0
> >>>> Blocks with corrupt replicas: 0
> >>>> Missing blocks: 0
> >>>>
> >>>> -------------------------------------------------
> >>>> Datanodes available: 2 (2 total, 0 dead)
> >>>>
> >>>> Name: 11.1.0.6:50010
> >>>> Decommission Status : Normal
> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >>>> DFS Used: 28672 (28 KB)
> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >>>> DFS Remaining: 8807800832(8.2 GB)
> >>>> DFS Used%: 0%
> >>>> DFS Remaining%: 83.31%
> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >>>>
> >>>>
> >>>> Name: 11.1.0.3:50010
> >>>> Decommission Status : Normal
> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >>>> DFS Used: 28672 (28 KB)
> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >>>> DFS Remaining: 8807641088(8.2 GB)
> >>>> DFS Used%: 0%
> >>>> DFS Remaining%: 83.3%
> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >>>>
> >>>>
> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >>>> ashettia@hortonworks.com> wrote:
> >>>>
> >>>>> Hi Redwane,
> >>>>>
> >>>>> Please run the following command as hdfs user on any datanode. The
> >>>>> output will be something like this. Hope this helps
> >>>>>
> >>>>> hadoop dfsadmin -report
> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >>>>> DFS Used: 480129024 (457.89 MB)
> >>>>> DFS Used%: 0.68%
> >>>>> Under replicated blocks: 0
> >>>>> Blocks with corrupt replicas: 0
> >>>>> Missing blocks: 0
> >>>>>
> >>>>> Thanks
> >>>>> -Abdelrahman
> >>>>>
> >>>>>
> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
> reduno1985@googlemail.com>wrote:
> >>>>>
> >>>>>>
> >>>>>> I have my hosts running on openstack virtual machine instances each
> >>>>>> instance has 10gb hard disc . Is there a way too see how much space
> is in
> >>>>>> the hdfs without web ui .
> >>>>>>
> >>>>>>
> >>>>>> Sent from Samsung Mobile
> >>>>>>
> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >>>>>> Check web ui how much space you have on hdfs???
> >>>>>>
> >>>>>> Sent from my iPhone
> >>>>>>
> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >>>>>> ashettia@hortonworks.com> wrote:
> >>>>>>
> >>>>>> Hi Redwane ,
> >>>>>>
> >>>>>> It is possible that the hosts which are running tasks are do not
> have
> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >>>>>> reduno1985@googlemail.com> wrote:
> >>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> ---------- Forwarded message ----------
> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >>>>>>> Subject: About running a simple wordcount mapreduce
> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >>>>>>>
> >>>>>>>
> >>>>>>> Hi
> >>>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >>>>>>> The jobtracker log file shows the following warning:
> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
> map to
> >>take
> >>>>>>> 1317624576693539401
> >>>>>>>
> >>>>>>> Please help me ,
> >>>>>>> Best Regards,
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>>
> >>>
> >
> >
> > Matteo Lanati
> > Distributed Resources Group
> > Leibniz-Rechenzentrum (LRZ)
> > Boltzmannstrasse 1
> > 85748 Garching b. München (Germany)
> > Phone: +49 89 35831 8724
>
>
>
> --
> Harsh J
>

RE: Re:

Posted by "Lanati, Matteo" <Ma...@lrz.de>.
Hi Harsh,

thanks for the quick investigation.
This seems to fit my case: the job is just submitted but stuck at 0%.
Bye,

Matteo

Matteo Lanati
Distributed Resources Group
Leibniz-Rechenzentrum (LRZ)
Boltzmannstrasse 1
85748 Garching b. München (Germany)
Phone: +49 89 35831 8724

________________________________________
From: Harsh J [harsh@cloudera.com]
Sent: 01 June 2013 17:50
To: <us...@hadoop.apache.org>
Subject: Re:

Does smell like a bug as that number you get is simply Long.MAX_VALUE,
or 8 exbibytes.

Looking at the sources, this turns out to be a rather funny Java issue
(there's a divide by zero happening and [1] suggests Long.MAX_VALUE
return in such a case). I've logged a bug report for this at
https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
reproducible case.

Does this happen consistently for you?

[1] http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)

On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de> wrote:
> Hi all,
>
> I stumbled upon this problem as well while trying to run the default wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is used as JT+NN, the other as TT+DN. Security is enabled. The input file is about 600 kB and the error is
>
> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
>
> The logfile is attached, together with the configuration files. The version I'm using is
>
> Hadoop 1.2.0
> Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> This command was run using /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>
> If I run the default configuration (i.e. no securty), then the job succeeds.
>
> Is there something missing in how I set up my nodes? How is it possible that the envisaged value for the needed space is so big?
>
> Thanks in advance.
>
> Matteo
>
>
>
>>Which version of Hadoop are you using. A quick search shows me a bug
>>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>>similar symptoms. However, that was fixed a long while ago.
>>
>>
>>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>reduno1985@googlemail.com> wrote:
>>
>>> This the content of the jobtracker log file :
>>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000000 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000001 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000002 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000003 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000004 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000005 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000006 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
>>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>>> reduce tasks.
>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>> task_201303231139_0001_m_000008, for tracker
>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task
>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>> task_201303231139_0001_m_000008 successfully.
>>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free; but we
>>> expect map to take 1317624576693539401
>>>
>>>
>>> The value in we excpect map to take is too huge   1317624576693539401
>>> bytes  !!!!!!!
>>>
>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>> reduno1985@googlemail.com> wrote:
>>>
>>>> The estimated value that the hadoop compute is too huge for the simple
>>>> example that i am running .
>>>>
>>>> ---------- Forwarded message ----------
>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>>>> Subject: Re: About running a simple wordcount mapreduce
>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>
>>>>
>>>> This the output that I get I am running two machines  as you can see  do
>>>> u see anything suspicious ?
>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>> Present Capacity: 17615499264 (16.41 GB)
>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>> DFS Used: 57344 (56 KB)
>>>> DFS Used%: 0%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> -------------------------------------------------
>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>
>>>> Name: 11.1.0.6:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>> DFS Used: 28672 (28 KB)
>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>> DFS Remaining: 8807800832(8.2 GB)
>>>> DFS Used%: 0%
>>>> DFS Remaining%: 83.31%
>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>
>>>>
>>>> Name: 11.1.0.3:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>> DFS Used: 28672 (28 KB)
>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>> DFS Remaining: 8807641088(8.2 GB)
>>>> DFS Used%: 0%
>>>> DFS Remaining%: 83.3%
>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>
>>>>
>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>> ashettia@hortonworks.com> wrote:
>>>>
>>>>> Hi Redwane,
>>>>>
>>>>> Please run the following command as hdfs user on any datanode. The
>>>>> output will be something like this. Hope this helps
>>>>>
>>>>> hadoop dfsadmin -report
>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>> DFS Used: 480129024 (457.89 MB)
>>>>> DFS Used%: 0.68%
>>>>> Under replicated blocks: 0
>>>>> Blocks with corrupt replicas: 0
>>>>> Missing blocks: 0
>>>>>
>>>>> Thanks
>>>>> -Abdelrahman
>>>>>
>>>>>
>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <re...@googlemail.com>wrote:
>>>>>
>>>>>>
>>>>>> I have my hosts running on openstack virtual machine instances each
>>>>>> instance has 10gb hard disc . Is there a way too see how much space is in
>>>>>> the hdfs without web ui .
>>>>>>
>>>>>>
>>>>>> Sent from Samsung Mobile
>>>>>>
>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>> Check web ui how much space you have on hdfs???
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>
>>>>>> Hi Redwane ,
>>>>>>
>>>>>> It is possible that the hosts which are running tasks are do not have
>>>>>> enough space. Those dirs are confiugred in mapred-site.xml
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> ---------- Forwarded message ----------
>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>
>>>>>>>
>>>>>>> Hi
>>>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
>>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
>>take
>>>>>>> 1317624576693539401
>>>>>>>
>>>>>>> Please help me ,
>>>>>>> Best Regards,
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>
>
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748 Garching b. München (Germany)
> Phone: +49 89 35831 8724



--
Harsh J

Re:

Posted by Azuryy Yu <az...@gmail.com>.
This should be fixed in the hadoop-1.1.2 stable release.
If completedMapsInputSize is zero, then the job's map task count must
be zero, so the estimated output size is zero.
Below is the code:

  long getEstimatedMapOutputSize() {
    long estimate = 0L;
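    // desiredMaps() may be 0 for a job with no map tasks; skip the division
    // below to avoid dividing by zero.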
    if (job.desiredMaps() > 0) {
      estimate = getEstimatedTotalMapOutputSize()  / job.desiredMaps();
    }
    return estimate;
  }
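
To make Harsh's Math.round() pointer concrete: if the estimator's
completedMapsInputSize is still zero when the estimate is computed, the
floating-point division yields +Infinity instead of throwing, and
Math.round(+Infinity) is defined to return Long.MAX_VALUE, i.e. exactly the
9223372036854775807 bytes in the warning above. A minimal, self-contained
sketch of that arithmetic (variable names and the formula are illustrative
here, not quoted from the real ResourceEstimator):

  public class RoundOverflowDemo {
    public static void main(String[] args) {
      long inputSize = 6950001L;            // job input bytes, as in the quoted jobtracker log
      long completedMapsOutputSize = 100L;  // output produced by the maps finished so far
      long completedMapsInputSize = 0L;     // nothing recorded yet, so the denominator is zero

      // Dividing a positive double by 0.0 gives +Infinity rather than throwing,
      // and Math.round(double) maps +Infinity to Long.MAX_VALUE.
      double estimate = (inputSize * completedMapsOutputSize * 2.0)
          / completedMapsInputSize;

      System.out.println(estimate);             // Infinity
      System.out.println(Math.round(estimate)); // 9223372036854775807
    }
  }

Since no node can report that many bytes free, every map assignment is
rejected with the "No room for map task" warning and the job stays at 0%.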



On Sat, Jun 1, 2013 at 11:49 PM, Harsh J <ha...@cloudera.com> wrote:

> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> or 8 exbibytes.
>
> Looking at the sources, this turns out to be a rather funny Java issue
> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> return in such a case). I've logged a bug report for this at
> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> reproducible case.
>
> Does this happen consistently for you?
>
> [1]
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>
> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> wrote:
> > Hi all,
> >
> > I stumbled upon this problem as well while trying to run the default
> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
> about 600 kB and the error is
> >
> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
> expect map to take 9223372036854775807
> >
> > The logfile is attached, together with the configuration files. The
> version I'm using is
> >
> > Hadoop 1.2.0
> > Subversion
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> 1479473
> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> > This command was run using
> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >
> > If I run the default configuration (i.e. no securty), then the job
> succeeds.
> >
> > Is there something missing in how I set up my nodes? How is it possible
> that the envisaged value for the needed space is so big?
> >
> > Thanks in advance.
> >
> > Matteo
> >
> >
> >
> >>Which version of Hadoop are you using. A quick search shows me a bug
> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >>similar symptoms. However, that was fixed a long while ago.
> >>
> >>
> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >>reduno1985@googlemail.com> wrote:
> >>
> >>> This the content of the jobtracker log file :
> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> Input
> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000000 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000001 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000002 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000003 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000004 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000005 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000006 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
> Job
> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
> >>> reduce tasks.
> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> Adding
> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >>> task_201303231139_0001_m_000008, for tracker
> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> Task
> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >>> task_201303231139_0001_m_000008 successfully.
> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>>
> >>>
> >>> The value in we excpect map to take is too huge   1317624576693539401
> >>> bytes  !!!!!!!
> >>>
> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >>> reduno1985@googlemail.com> wrote:
> >>>
> >>>> The estimated value that the hadoop compute is too huge for the simple
> >>>> example that i am running .
> >>>>
> >>>> ---------- Forwarded message ----------
> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >>>> Subject: Re: About running a simple wordcount mapreduce
> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >>>>
> >>>>
> >>>> This the output that I get I am running two machines  as you can see
>  do
> >>>> u see anything suspicious ?
> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >>>> Present Capacity: 17615499264 (16.41 GB)
> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >>>> DFS Used: 57344 (56 KB)
> >>>> DFS Used%: 0%
> >>>> Under replicated blocks: 0
> >>>> Blocks with corrupt replicas: 0
> >>>> Missing blocks: 0
> >>>>
> >>>> -------------------------------------------------
> >>>> Datanodes available: 2 (2 total, 0 dead)
> >>>>
> >>>> Name: 11.1.0.6:50010
> >>>> Decommission Status : Normal
> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >>>> DFS Used: 28672 (28 KB)
> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >>>> DFS Remaining: 8807800832(8.2 GB)
> >>>> DFS Used%: 0%
> >>>> DFS Remaining%: 83.31%
> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >>>>
> >>>>
> >>>> Name: 11.1.0.3:50010
> >>>> Decommission Status : Normal
> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >>>> DFS Used: 28672 (28 KB)
> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >>>> DFS Remaining: 8807641088(8.2 GB)
> >>>> DFS Used%: 0%
> >>>> DFS Remaining%: 83.3%
> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >>>>
> >>>>
> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >>>> ashettia@hortonworks.com> wrote:
> >>>>
> >>>>> Hi Redwane,
> >>>>>
> >>>>> Please run the following command as hdfs user on any datanode. The
> >>>>> output will be something like this. Hope this helps
> >>>>>
> >>>>> hadoop dfsadmin -report
> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >>>>> DFS Used: 480129024 (457.89 MB)
> >>>>> DFS Used%: 0.68%
> >>>>> Under replicated blocks: 0
> >>>>> Blocks with corrupt replicas: 0
> >>>>> Missing blocks: 0
> >>>>>
> >>>>> Thanks
> >>>>> -Abdelrahman
> >>>>>
> >>>>>
> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
> reduno1985@googlemail.com>wrote:
> >>>>>
> >>>>>>
> >>>>>> I have my hosts running on openstack virtual machine instances each
> >>>>>> instance has 10gb hard disc . Is there a way too see how much space
> is in
> >>>>>> the hdfs without web ui .
> >>>>>>
> >>>>>>
> >>>>>> Sent from Samsung Mobile
> >>>>>>
> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >>>>>> Check web ui how much space you have on hdfs???
> >>>>>>
> >>>>>> Sent from my iPhone
> >>>>>>
> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >>>>>> ashettia@hortonworks.com> wrote:
> >>>>>>
> >>>>>> Hi Redwane ,
> >>>>>>
> >>>>>> It is possible that the hosts which are running tasks are do not
> have
> >>>>>> enough space. Those dirs are confiugred in mapred-site.xml
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >>>>>> reduno1985@googlemail.com> wrote:
> >>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> ---------- Forwarded message ----------
> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >>>>>>> Subject: About running a simple wordcount mapreduce
> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >>>>>>>
> >>>>>>>
> >>>>>>> Hi
> >>>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >>>>>>> The jobtracker log file shows the following warning:
> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
> map to
> >>take
> >>>>>>> 1317624576693539401
> >>>>>>>
> >>>>>>> Please help me ,
> >>>>>>> Best Regards,
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>>
> >>>
> >
> >
> > Matteo Lanati
> > Distributed Resources Group
> > Leibniz-Rechenzentrum (LRZ)
> > Boltzmannstrasse 1
> > 85748 Garching b. München (Germany)
> > Phone: +49 89 35831 8724
>
>
>
> --
> Harsh J
>

Re:

Posted by Shahab Yunus <sh...@gmail.com>.
Hi Harsh,

Quick question though: why do you think it only happens if the OP 'uses
security' as he mentioned?

Regards,
Shahab


On Sat, Jun 1, 2013 at 11:49 AM, Harsh J <ha...@cloudera.com> wrote:

> Does smell like a bug as that number you get is simply Long.MAX_VALUE,
> or 8 exbibytes.
>
> Looking at the sources, this turns out to be a rather funny Java issue
> (there's a divide by zero happening and [1] suggests Long.MAX_VALUE
> return in such a case). I've logged a bug report for this at
> https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
> reproducible case.
>
> Does this happen consistently for you?
>
> [1]
> http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
>
> On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de>
> wrote:
> > Hi all,
> >
> > I stumbled upon this problem as well while trying to run the default
> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
> about 600 kB and the error is
> >
> > 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
> expect map to take 9223372036854775807
> >
> > The logfile is attached, together with the configuration files. The
> version I'm using is
> >
> > Hadoop 1.2.0
> > Subversion
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> 1479473
> > Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> > From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> > This command was run using
> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
> >
> > If I run the default configuration (i.e. no securty), then the job
> succeeds.
> >
> > Is there something missing in how I set up my nodes? How is it possible
> that the envisaged value for the needed space is so big?
> >
> > Thanks in advance.
> >
> > Matteo
> >
> >
> >
> >>Which version of Hadoop are you using. A quick search shows me a bug
> >>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >>similar symptoms. However, that was fixed a long while ago.
> >>
> >>
> >>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >>reduno1985@googlemail.com> wrote:
> >>
> >>> This the content of the jobtracker log file :
> >>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> Input
> >>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000000 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000001 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000002 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000003 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000004 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000005 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> tip:task_201303231139_0001_m_000006 has split on
> >>> node:/default-rack/hadoop0.novalocal
> >>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress:
> Job
> >>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
> >>> reduce tasks.
> >>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker:
> Adding
> >>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >>> task_201303231139_0001_m_000008, for tracker
> >>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> Task
> >>> 'attempt_201303231139_0001_m_000008_0' has completed
> >>> task_201303231139_0001_m_000008 successfully.
> >>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
> >>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> but we
> >>> expect map to take 1317624576693539401
> >>>
> >>>
> >>> The value in we excpect map to take is too huge   1317624576693539401
> >>> bytes  !!!!!!!
> >>>
> >>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >>> reduno1985@googlemail.com> wrote:
> >>>
> >>>> The estimated value that the hadoop compute is too huge for the simple
> >>>> example that i am running .
> >>>>
> >>>> ---------- Forwarded message ----------
> >>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >>>> Subject: Re: About running a simple wordcount mapreduce
> >>>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >>>>
> >>>>
> >>>> This is the output that I get. I am running two machines, as you can
> >>>> see. Do you see anything suspicious?
> >>>> Configured Capacity: 21145698304 (19.69 GB)
> >>>> Present Capacity: 17615499264 (16.41 GB)
> >>>> DFS Remaining: 17615441920 (16.41 GB)
> >>>> DFS Used: 57344 (56 KB)
> >>>> DFS Used%: 0%
> >>>> Under replicated blocks: 0
> >>>> Blocks with corrupt replicas: 0
> >>>> Missing blocks: 0
> >>>>
> >>>> -------------------------------------------------
> >>>> Datanodes available: 2 (2 total, 0 dead)
> >>>>
> >>>> Name: 11.1.0.6:50010
> >>>> Decommission Status : Normal
> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >>>> DFS Used: 28672 (28 KB)
> >>>> Non DFS Used: 1765019648 (1.64 GB)
> >>>> DFS Remaining: 8807800832(8.2 GB)
> >>>> DFS Used%: 0%
> >>>> DFS Remaining%: 83.31%
> >>>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >>>>
> >>>>
> >>>> Name: 11.1.0.3:50010
> >>>> Decommission Status : Normal
> >>>> Configured Capacity: 10572849152 (9.85 GB)
> >>>> DFS Used: 28672 (28 KB)
> >>>> Non DFS Used: 1765179392 (1.64 GB)
> >>>> DFS Remaining: 8807641088(8.2 GB)
> >>>> DFS Used%: 0%
> >>>> DFS Remaining%: 83.3%
> >>>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >>>>
> >>>>
> >>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >>>> ashettia@hortonworks.com> wrote:
> >>>>
> >>>>> Hi Redwane,
> >>>>>
> >>>>> Please run the following command as hdfs user on any datanode. The
> >>>>> output will be something like this. Hope this helps
> >>>>>
> >>>>> hadoop dfsadmin -report
> >>>>> Configured Capacity: 81075068925 (75.51 GB)
> >>>>> Present Capacity: 70375292928 (65.54 GB)
> >>>>> DFS Remaining: 69895163904 (65.09 GB)
> >>>>> DFS Used: 480129024 (457.89 MB)
> >>>>> DFS Used%: 0.68%
> >>>>> Under replicated blocks: 0
> >>>>> Blocks with corrupt replicas: 0
> >>>>> Missing blocks: 0
> >>>>>
> >>>>> Thanks
> >>>>> -Abdelrahman
> >>>>>
> >>>>>
> >>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
> reduno1985@googlemail.com>wrote:
> >>>>>
> >>>>>>
> >>>>>> I have my hosts running on OpenStack virtual machine instances; each
> >>>>>> instance has a 10 GB hard disk. Is there a way to see how much space
> >>>>>> is in HDFS without the web UI?
> >>>>>>
> >>>>>>
> >>>>>> Sent from Samsung Mobile
> >>>>>>
> >>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >>>>>> Check web ui how much space you have on hdfs???
> >>>>>>
> >>>>>> Sent from my iPhone
> >>>>>>
> >>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >>>>>> ashettia@hortonworks.com> wrote:
> >>>>>>
> >>>>>> Hi Redwane ,
> >>>>>>
> >>>>>> It is possible that the hosts which are running tasks do not have
> >>>>>> enough space. Those dirs are configured in mapred-site.xml
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >>>>>> reduno1985@googlemail.com> wrote:
> >>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> ---------- Forwarded message ----------
> >>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >>>>>>> Subject: About running a simple wordcount mapreduce
> >>>>>>> To: mapreduce-issues@hadoop.apache.org
> >>>>>>>
> >>>>>>>
> >>>>>>> Hi
> >>>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
> >>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >>>>>>> The jobtracker log file shows the following warning:
> >>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
> >>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect
> map to
> >>take
> >>>>>>> 1317624576693539401
> >>>>>>>
> >>>>>>> Please help me ,
> >>>>>>> Best Regards,
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>>
> >>>
> >
> >
> > Matteo Lanati
> > Distributed Resources Group
> > Leibniz-Rechenzentrum (LRZ)
> > Boltzmannstrasse 1
> > 85748 Garching b. München (Germany)
> > Phone: +49 89 35831 8724
>
>
>
> --
> Harsh J
>

Re:

Posted by Harsh J <ha...@cloudera.com>.
Does smell like a bug as that number you get is simply Long.MAX_VALUE,
or 8 exbibytes.

Looking at the sources, this turns out to be a rather funny Java issue
(there's a divide by zero happening and [1] suggests Long.MAX_VALUE
return in such a case). I've logged a bug report for this at
https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
reproducible case.

Does this happen consistently for you?

[1] http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
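
A minimal, self-contained Java sketch of the JDK behaviour referenced in
[1] (an illustration only, not the actual Hadoop estimator code; the
variable names are made up): a double divide-by-zero produces Infinity
rather than an exception, and Math.round(double) clamps positive infinity
to Long.MAX_VALUE, which is exactly the 9223372036854775807 bytes in the
warning quoted below.

    // Illustration only: why a divide-by-zero in a double-based estimate
    // can surface as Long.MAX_VALUE after Math.round().
    public class RoundOverflowDemo {
        public static void main(String[] args) {
            double inputBytes = 600 * 1024;   // e.g. a ~600 kB input
            double completedMaps = 0.0;       // nothing completed yet

            double estimate = inputBytes / completedMaps;
            System.out.println(estimate);     // Infinity

            long expected = Math.round(estimate);
            System.out.println(expected);     // 9223372036854775807
        }
    }

Compiling and running this prints Infinity followed by 9223372036854775807,
the same clamped value reported by the JobTracker.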

On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de> wrote:
> Hi all,
>
> I stumbled upon this problem as well while trying to run the default wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is used as JT+NN, the other as TT+DN. Security is enabled. The input file is about 600 kB and the error is
>
> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
>
> The logfile is attached, together with the configuration files. The version I'm using is
>
> Hadoop 1.2.0
> Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> This command was run using /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>
> If I run the default configuration (i.e. no security), then the job succeeds.
>
> Is there something missing in how I set up my nodes? How is it possible that the envisaged value for the needed space is so big?
>
> Thanks in advance.
>
> Matteo
>
>
>
>>Which version of Hadoop are you using. A quick search shows me a bug
>>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>>similar symptoms. However, that was fixed a long while ago.
>>
>>
>>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>reduno1985@googlemail.com> wrote:
>>
>>> This the content of the jobtracker log file :
>>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000000 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000001 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000002 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000003 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000004 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000005 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000006 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
>>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>>> reduce tasks.
>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>> task_201303231139_0001_m_000008, for tracker
>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task
>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>> task_201303231139_0001_m_000008 successfully.
>>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free; but we
>>> expect map to take 1317624576693539401
>>>
>>>
>>> The value in "we expect map to take" is too huge: 1317624576693539401
>>> bytes!
>>>
>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>> reduno1985@googlemail.com> wrote:
>>>
>>>> The estimated value that the hadoop compute is too huge for the simple
>>>> example that i am running .
>>>>
>>>> ---------- Forwarded message ----------
>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>>>> Subject: Re: About running a simple wordcount mapreduce
>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>
>>>>
>>>> This the output that I get I am running two machines  as you can see  do
>>>> u see anything suspicious ?
>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>> Present Capacity: 17615499264 (16.41 GB)
>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>> DFS Used: 57344 (56 KB)
>>>> DFS Used%: 0%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> -------------------------------------------------
>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>
>>>> Name: 11.1.0.6:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>> DFS Used: 28672 (28 KB)
>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>> DFS Remaining: 8807800832(8.2 GB)
>>>> DFS Used%: 0%
>>>> DFS Remaining%: 83.31%
>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>
>>>>
>>>> Name: 11.1.0.3:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>> DFS Used: 28672 (28 KB)
>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>> DFS Remaining: 8807641088(8.2 GB)
>>>> DFS Used%: 0%
>>>> DFS Remaining%: 83.3%
>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>
>>>>
>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>> ashettia@hortonworks.com> wrote:
>>>>
>>>>> Hi Redwane,
>>>>>
>>>>> Please run the following command as hdfs user on any datanode. The
>>>>> output will be something like this. Hope this helps
>>>>>
>>>>> hadoop dfsadmin -report
>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>> DFS Used: 480129024 (457.89 MB)
>>>>> DFS Used%: 0.68%
>>>>> Under replicated blocks: 0
>>>>> Blocks with corrupt replicas: 0
>>>>> Missing blocks: 0
>>>>>
>>>>> Thanks
>>>>> -Abdelrahman
>>>>>
>>>>>
>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <re...@googlemail.com>wrote:
>>>>>
>>>>>>
>>>>>> I have my hosts running on openstack virtual machine instances each
>>>>>> instance has 10gb hard disc . Is there a way too see how much space is in
>>>>>> the hdfs without web ui .
>>>>>>
>>>>>>
>>>>>> Sent from Samsung Mobile
>>>>>>
>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>> Check web ui how much space you have on hdfs???
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>
>>>>>> Hi Redwane ,
>>>>>>
>>>>>> It is possible that the hosts which are running tasks do not have
>>>>>> enough space. Those dirs are configured in mapred-site.xml
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> ---------- Forwarded message ----------
>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>
>>>>>>>
>>>>>>> Hi
>>>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
>>>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
>>take
>>>>>>> 1317624576693539401
>>>>>>>
>>>>>>> Please help me ,
>>>>>>> Best Regards,
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>
>
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748 Garching b. München (Germany)
> Phone: +49 89 35831 8724



--
Harsh J

Re:

Posted by Shahab Yunus <sh...@gmail.com>.
It seems to me that, since it is failing only when you try to run with
security turned on, data cannot be written to the disk due to permissions
(because security is turned on), while without security possibly no such
checks are performed and the writes succeed. I don't know, I am not sure,
it is just a hunch after your latest comment.
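
One quick way to test that hunch (a rough sketch; the path below is only a
placeholder for whatever mapred.local.dir points to on the TaskTracker) is
to attempt a real write into the task-tracker's local directory as the same
user the tasks run under when security is enabled:

    import java.io.File;
    import java.io.IOException;

    // Sketch: probe whether a local directory is actually writable by this user.
    public class LocalDirWriteCheck {
        public static void main(String[] args) {
            // Placeholder path; pass the real mapred.local.dir value as an argument.
            File dir = new File(args.length > 0 ? args[0] : "/tmp/mapred/local");
            System.out.println(dir + " exists=" + dir.exists()
                    + " canWrite=" + dir.canWrite());
            try {
                // Attempt a real write, since canWrite() alone can be misleading.
                File probe = File.createTempFile("probe", ".tmp", dir);
                System.out.println("write OK: " + probe);
                probe.delete();
            } catch (IOException e) {
                System.out.println("write FAILED: " + e);
            }
        }
    }

If the probe write fails only in the secured setup, that would point at
directory ownership or permissions rather than at the size estimate itself.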

Regards,
Shahab


On Sat, Jun 1, 2013 at 9:57 AM, Lanati, Matteo <Ma...@lrz.de> wrote:

> Hi all,
>
> I stumbled upon this problem as well while trying to run the default
> wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual
> machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is
> used as JT+NN, the other as TT+DN. Security is enabled. The input file is
> about 600 kB and the error is
>
> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No
> room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we
> expect map to take 9223372036854775807
>
> The logfile is attached, together with the configuration files. The
> version I'm using is
>
> Hadoop 1.2.0
> Subversion
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r
> 1479473
> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> This command was run using
> /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>
> If I run the default configuration (i.e. no security), then the job
> succeeds.
>
> Is there something missing in how I set up my nodes? How is it possible
> that the envisaged value for the needed space is so big?
>
> Thanks in advance.
>
> Matteo
>
>
>
> >Which version of Hadoop are you using. A quick search shows me a bug
> >https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
> >similar symptoms. However, that was fixed a long while ago.
> >
> >
> >On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
> >reduno1985@googlemail.com> wrote:
> >
> >> This the content of the jobtracker log file :
> >> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress:
> Input
> >> size for job job_201303231139_0001 = 6950001. Number of splits = 7
> >> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000000 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000001 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000002 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000003 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000004 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000005 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
> >> tip:task_201303231139_0001_m_000006 has split on
> >> node:/default-rack/hadoop0.novalocal
> >> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
> >> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
> >> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
> >> job_201303231139_0001 initialized successfully with 7 map tasks and 1
> >> reduce tasks.
> >> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
> >> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
> >> task_201303231139_0001_m_000008, for tracker
> >> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
> >> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress:
> Task
> >> 'attempt_201303231139_0001_m_000008_0' has completed
> >> task_201303231139_0001_m_000008 successfully.
> >> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop0.novalocal has 8791543808 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
> >> room for map task. Node hadoop1.novalocal has 8807518208 bytes free;
> but we
> >> expect map to take 1317624576693539401
> >>
> >>
> >> The value in "we expect map to take" is too huge: 1317624576693539401
> >> bytes!
> >>
> >> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
> >> reduno1985@googlemail.com> wrote:
> >>
> >>> The estimated value that the hadoop compute is too huge for the simple
> >>> example that i am running .
> >>>
> >>> ---------- Forwarded message ----------
> >>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>  Date: Sat, Mar 23, 2013 at 11:32 AM
> >>> Subject: Re: About running a simple wordcount mapreduce
> >>> To: Abdelrahman Shettia <as...@hortonworks.com>
> >>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
> >>>
> >>>
> >>> This the output that I get I am running two machines  as you can see
>  do
> >>> u see anything suspicious ?
> >>> Configured Capacity: 21145698304 (19.69 GB)
> >>> Present Capacity: 17615499264 (16.41 GB)
> >>> DFS Remaining: 17615441920 (16.41 GB)
> >>> DFS Used: 57344 (56 KB)
> >>> DFS Used%: 0%
> >>> Under replicated blocks: 0
> >>> Blocks with corrupt replicas: 0
> >>> Missing blocks: 0
> >>>
> >>> -------------------------------------------------
> >>> Datanodes available: 2 (2 total, 0 dead)
> >>>
> >>> Name: 11.1.0.6:50010
> >>> Decommission Status : Normal
> >>> Configured Capacity: 10572849152 (9.85 GB)
> >>> DFS Used: 28672 (28 KB)
> >>> Non DFS Used: 1765019648 (1.64 GB)
> >>> DFS Remaining: 8807800832(8.2 GB)
> >>> DFS Used%: 0%
> >>> DFS Remaining%: 83.31%
> >>> Last contact: Sat Mar 23 11:30:10 CET 2013
> >>>
> >>>
> >>> Name: 11.1.0.3:50010
> >>> Decommission Status : Normal
> >>> Configured Capacity: 10572849152 (9.85 GB)
> >>> DFS Used: 28672 (28 KB)
> >>> Non DFS Used: 1765179392 (1.64 GB)
> >>> DFS Remaining: 8807641088(8.2 GB)
> >>> DFS Used%: 0%
> >>> DFS Remaining%: 83.3%
> >>> Last contact: Sat Mar 23 11:30:08 CET 2013
> >>>
> >>>
> >>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
> >>> ashettia@hortonworks.com> wrote:
> >>>
> >>>> Hi Redwane,
> >>>>
> >>>> Please run the following command as hdfs user on any datanode. The
> >>>> output will be something like this. Hope this helps
> >>>>
> >>>> hadoop dfsadmin -report
> >>>> Configured Capacity: 81075068925 (75.51 GB)
> >>>> Present Capacity: 70375292928 (65.54 GB)
> >>>> DFS Remaining: 69895163904 (65.09 GB)
> >>>> DFS Used: 480129024 (457.89 MB)
> >>>> DFS Used%: 0.68%
> >>>> Under replicated blocks: 0
> >>>> Blocks with corrupt replicas: 0
> >>>> Missing blocks: 0
> >>>>
> >>>> Thanks
> >>>> -Abdelrahman
> >>>>
> >>>>
> >>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <
> reduno1985@googlemail.com>wrote:
> >>>>
> >>>>>
> >>>>> I have my hosts running on openstack virtual machine instances each
> >>>>> instance has 10gb hard disc . Is there a way too see how much space
> is in
> >>>>> the hdfs without web ui .
> >>>>>
> >>>>>
> >>>>> Sent from Samsung Mobile
> >>>>>
> >>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
> >>>>> Check web ui how much space you have on hdfs???
> >>>>>
> >>>>> Sent from my iPhone
> >>>>>
> >>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
> >>>>> ashettia@hortonworks.com> wrote:
> >>>>>
> >>>>> Hi Redwane ,
> >>>>>
> >>>>> It is possible that the hosts which are running tasks do not have
> >>>>> enough space. Those dirs are configured in mapred-site.xml
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
> >>>>> reduno1985@googlemail.com> wrote:
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> ---------- Forwarded message ----------
> >>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
> >>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
> >>>>>> Subject: About running a simple wordcount mapreduce
> >>>>>> To: mapreduce-issues@hadoop.apache.org
> >>>>>>
> >>>>>>
> >>>>>> Hi
> >>>>>> I am trying to run  a wordcount mapreduce job on several files (<20
> >>>>>> mb) using two machines . I get stuck on 0% map 0% reduce.
> >>>>>> The jobtracker log file shows the following warning:
> >>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
> >>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map
> to
> >take
> >>>>>> 1317624576693539401
> >>>>>>
> >>>>>> Please help me ,
> >>>>>> Best Regards,
> >>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>>
> >>
>
>
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748 Garching b. München (Germany)
> Phone: +49 89 35831 8724

Re:

Posted by Harsh J <ha...@cloudera.com>.
Does smell like a bug as that number you get is simply Long.MAX_VALUE,
or 8 exbibytes.

Looking at the sources, this turns out to be a rather funny Java issue
(there's a divide by zero happening and [1] suggests Long.MAX_VALUE
return in such a case). I've logged a bug report for this at
https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
reproducible case.

Does this happen consistently for you?

[1] http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)

On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de> wrote:
> Hi all,
>
> I stumbled upon this problem as well while trying to run the default wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is used as JT+NN, the other as TT+DN. Security is enabled. The input file is about 600 kB and the error is
>
> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807
>
> The logfile is attached, together with the configuration files. The version I'm using is
>
> Hadoop 1.2.0
> Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473
> Compiled by hortonfo on Mon May  6 06:59:37 UTC 2013
> From source with checksum 2e0dac51ede113c1f2ca8e7d82fb3405
> This command was run using /home/lu95jib/hadoop-exmpl/hadoop-1.2.0/hadoop-core-1.2.0.jar
>
> If I run the default configuration (i.e. no securty), then the job succeeds.
>
> Is there something missing in how I set up my nodes? How is it possible that the envisaged value for the needed space is so big?
>
> Thanks in advance.
>
> Matteo
>
>
>
>>Which version of Hadoop are you using. A quick search shows me a bug
>>https://issues.apache.org/jira/browse/HADOOP-5241 that seems to show
>>similar symptoms. However, that was fixed a long while ago.
>>
>>
>>On Sat, Mar 23, 2013 at 4:40 PM, Redwane belmaati cherkaoui <
>>reduno1985@googlemail.com> wrote:
>>
>>> This the content of the jobtracker log file :
>>> 2013-03-23 12:06:48,912 INFO org.apache.hadoop.mapred.JobInProgress: Input
>>> size for job job_201303231139_0001 = 6950001. Number of splits = 7
>>> 2013-03-23 12:06:48,925 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000000 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,927 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000001 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,930 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000002 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,931 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000003 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,933 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000004 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,934 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000005 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,939 INFO org.apache.hadoop.mapred.JobInProgress:
>>> tip:task_201303231139_0001_m_000006 has split on
>>> node:/default-rack/hadoop0.novalocal
>>> 2013-03-23 12:06:48,950 INFO org.apache.hadoop.mapred.JobInProgress:
>>> job_201303231139_0001 LOCALITY_WAIT_FACTOR=0.5
>>> 2013-03-23 12:06:48,978 INFO org.apache.hadoop.mapred.JobInProgress: Job
>>> job_201303231139_0001 initialized successfully with 7 map tasks and 1
>>> reduce tasks.
>>> 2013-03-23 12:06:50,855 INFO org.apache.hadoop.mapred.JobTracker: Adding
>>> task (JOB_SETUP) 'attempt_201303231139_0001_m_000008_0' to tip
>>> task_201303231139_0001_m_000008, for tracker
>>> 'tracker_hadoop0.novalocal:hadoop0.novalocal/127.0.0.1:44879'
>>> 2013-03-23 12:08:00,340 INFO org.apache.hadoop.mapred.JobInProgress: Task
>>> 'attempt_201303231139_0001_m_000008_0' has completed
>>> task_201303231139_0001_m_000008 successfully.
>>> 2013-03-23 12:08:00,538 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,543 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:00,544 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop0.novalocal has 8791543808 bytes free; but we
>>> expect map to take 1317624576693539401
>>> 2013-03-23 12:08:01,264 WARN org.apache.hadoop.mapred.JobInProgress: No
>>> room for map task. Node hadoop1.novalocal has 8807518208 bytes free; but we
>>> expect map to take 1317624576693539401
>>>
>>>
>>> The value in "we expect map to take" is far too large: 1317624576693539401
>>> bytes!
>>>
>>> On Sat, Mar 23, 2013 at 11:37 AM, Redwane belmaati cherkaoui <
>>> reduno1985@googlemail.com> wrote:
>>>
>>>> The estimated value that Hadoop computes is far too large for the simple
>>>> example that I am running.
>>>>
>>>> ---------- Forwarded message ----------
>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>  Date: Sat, Mar 23, 2013 at 11:32 AM
>>>> Subject: Re: About running a simple wordcount mapreduce
>>>> To: Abdelrahman Shettia <as...@hortonworks.com>
>>>> Cc: user@hadoop.apache.org, reduno1985 <re...@gmail.com>
>>>>
>>>>
>>>> This is the output that I get. I am running two machines, as you can see.
>>>> Do you see anything suspicious?
>>>> Configured Capacity: 21145698304 (19.69 GB)
>>>> Present Capacity: 17615499264 (16.41 GB)
>>>> DFS Remaining: 17615441920 (16.41 GB)
>>>> DFS Used: 57344 (56 KB)
>>>> DFS Used%: 0%
>>>> Under replicated blocks: 0
>>>> Blocks with corrupt replicas: 0
>>>> Missing blocks: 0
>>>>
>>>> -------------------------------------------------
>>>> Datanodes available: 2 (2 total, 0 dead)
>>>>
>>>> Name: 11.1.0.6:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>> DFS Used: 28672 (28 KB)
>>>> Non DFS Used: 1765019648 (1.64 GB)
>>>> DFS Remaining: 8807800832(8.2 GB)
>>>> DFS Used%: 0%
>>>> DFS Remaining%: 83.31%
>>>> Last contact: Sat Mar 23 11:30:10 CET 2013
>>>>
>>>>
>>>> Name: 11.1.0.3:50010
>>>> Decommission Status : Normal
>>>> Configured Capacity: 10572849152 (9.85 GB)
>>>> DFS Used: 28672 (28 KB)
>>>> Non DFS Used: 1765179392 (1.64 GB)
>>>> DFS Remaining: 8807641088(8.2 GB)
>>>> DFS Used%: 0%
>>>> DFS Remaining%: 83.3%
>>>> Last contact: Sat Mar 23 11:30:08 CET 2013
>>>>
>>>>
>>>> On Fri, Mar 22, 2013 at 10:19 PM, Abdelrahman Shettia <
>>>> ashettia@hortonworks.com> wrote:
>>>>
>>>>> Hi Redwane,
>>>>>
>>>>> Please run the following command as the hdfs user on any datanode. The
>>>>> output will be something like this. Hope this helps.
>>>>>
>>>>> hadoop dfsadmin -report
>>>>> Configured Capacity: 81075068925 (75.51 GB)
>>>>> Present Capacity: 70375292928 (65.54 GB)
>>>>> DFS Remaining: 69895163904 (65.09 GB)
>>>>> DFS Used: 480129024 (457.89 MB)
>>>>> DFS Used%: 0.68%
>>>>> Under replicated blocks: 0
>>>>> Blocks with corrupt replicas: 0
>>>>> Missing blocks: 0
>>>>>
>>>>> Thanks
>>>>> -Abdelrahman
>>>>>
>>>>>
>>>>> On Fri, Mar 22, 2013 at 12:35 PM, reduno1985 <re...@googlemail.com>wrote:
>>>>>
>>>>>>
>>>>>> I have my hosts running on OpenStack virtual machine instances; each
>>>>>> instance has a 10 GB hard disk. Is there a way to see how much space is
>>>>>> left in HDFS without the web UI?
>>>>>>
>>>>>>
>>>>>> Sent from Samsung Mobile
>>>>>>
>>>>>> Serge Blazhievsky <ha...@gmail.com> wrote:
>>>>>> Check the web UI to see how much space you have on HDFS.
>>>>>>
>>>>>> Sent from my iPhone
>>>>>>
>>>>>> On Mar 22, 2013, at 11:41 AM, Abdelrahman Shettia <
>>>>>> ashettia@hortonworks.com> wrote:
>>>>>>
>>>>>> Hi Redwane ,
>>>>>>
>>>>>> It is possible that the hosts which are running tasks do not have
>>>>>> enough space. Those dirs are configured in mapred-site.xml.
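For reference, a minimal sketch of how such a local dir is typically declared in mapred-site.xml on Hadoop 1.x; mapred.local.dir is where map tasks write their intermediate data, and the paths below are only placeholders, not values taken from this cluster:

<!-- mapred-site.xml (sketch): local scratch space used by map tasks; placeholder paths -->
<property>
  <name>mapred.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local</value>
</property>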
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Mar 22, 2013 at 8:42 AM, Redwane belmaati cherkaoui <
>>>>>> reduno1985@googlemail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> ---------- Forwarded message ----------
>>>>>>> From: Redwane belmaati cherkaoui <re...@googlemail.com>
>>>>>>> Date: Fri, Mar 22, 2013 at 4:39 PM
>>>>>>> Subject: About running a simple wordcount mapreduce
>>>>>>> To: mapreduce-issues@hadoop.apache.org
>>>>>>>
>>>>>>>
>>>>>>> Hi,
>>>>>>> I am trying to run a wordcount mapreduce job on several files (<20
>>>>>>> MB) using two machines. I get stuck at 0% map, 0% reduce.
>>>>>>> The jobtracker log file shows the following warning:
>>>>>>>  WARN org.apache.hadoop.mapred.JobInProgress: No room for map task.
>>>>>>> Node hadoop0.novalocal has 8791384064 bytes free; but we expect map to
>>take
>>>>>>> 1317624576693539401
>>>>>>>
>>>>>>> Please help me,
>>>>>>> Best Regards,
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>
>
> Matteo Lanati
> Distributed Resources Group
> Leibniz-Rechenzentrum (LRZ)
> Boltzmannstrasse 1
> 85748 Garching b. München (Germany)
> Phone: +49 89 35831 8724



--
Harsh J

Re:

Posted by Harsh J <ha...@cloudera.com>.
Does smell like a bug as that number you get is simply Long.MAX_VALUE,
or 8 exbibytes.

Looking at the sources, this turns out to be a rather funny Java issue
(there's a divide by zero happening and [1] suggests Long.MAX_VALUE
return in such a case). I've logged a bug report for this at
https://issues.apache.org/jira/browse/MAPREDUCE-5288 with a
reproducible case.

Does this happen consistently for you?

[1] http://docs.oracle.com/javase/6/docs/api/java/lang/Math.html#round(double)
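To illustrate the Java behaviour involved, here is a minimal, standalone sketch (not the actual Hadoop code; the variable names are made up): dividing a double by zero silently produces Infinity, and Math.round() maps positive infinity to Long.MAX_VALUE, which is exactly the 9223372036854775807 reported below.

public class EstimateOverflowDemo {
  public static void main(String[] args) {
    double completedMapsInputSize = 0.0;       // nothing measured yet
    double completedMapsOutputSize = 600000.0; // some observed output size
    // Dividing a double by zero does not throw; it yields +Infinity.
    double ratio = completedMapsOutputSize / completedMapsInputSize;
    // Per the Javadoc linked above, Math.round(+Infinity) == Long.MAX_VALUE.
    long estimate = Math.round(ratio);
    System.out.println(estimate);              // 9223372036854775807
  }
}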

On Sat, Jun 1, 2013 at 7:27 PM, Lanati, Matteo <Ma...@lrz.de> wrote:
> Hi all,
>
> I stumbled upon this problem as well while trying to run the default wordcount shipped with Hadoop 1.2.0. My testbed is made up of 2 virtual machines: Debian 7, Oracle Java 7, 2 GB RAM, 25 GB hard disk. One node is used as JT+NN, the other as TT+DN. Security is enabled. The input file is about 600 kB and the error is
>
> 2013-06-01 12:22:51,999 WARN org.apache.hadoop.mapred.JobInProgress: No room for map task. Node 10.156.120.49 has 22854692864 bytes free; but we expect map to take 9223372036854775807



--
Harsh J