Posted to user@hadoop.apache.org by Ravikant Dindokar <ra...@gmail.com> on 2015/08/04 07:05:44 UTC

Unable to run map-reduce pi example (Hadoop 2.2.0)

Hi,

I am using Hadoop 2.2.0. When I try to run the pi example from the
examples jar, I get the following output:

$ hadoop jar hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 16 100000

Number of Maps  = 16
Samples per Map = 100000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
15/08/04 10:26:49 INFO client.RMProxy: Connecting to ResourceManager at orion-00/192.168.0.10:8032
15/08/04 10:26:50 INFO impl.YarnClientImpl: Submitted application application_1438664093458_0001 to ResourceManager at orion-00/192.168.0.10:8032
15/08/04 10:26:50 INFO mapreduce.Job: The url to track the job: http://orion-00:8088/proxy/application_1438664093458_0001/
15/08/04 10:26:50 INFO mapreduce.Job: Running job: job_1438664093458_0001
15/08/04 10:27:01 INFO mapreduce.Job: Job job_1438664093458_0001 running in uber mode : false
15/08/04 10:27:01 INFO mapreduce.Job:  map 0% reduce 0%
15/08/04 10:27:09 INFO mapreduce.Job:  map 6% reduce 0%
15/08/04 10:27:10 INFO mapreduce.Job:  map 25% reduce 0%
15/08/04 10:27:11 INFO mapreduce.Job:  map 100% reduce 0%
15/08/04 10:33:11 INFO mapreduce.Job: Job job_1438664093458_0001 failed with state FAILED due to:
java.io.FileNotFoundException: File does not exist: hdfs://orion-00:9000/user/bduser/QuasiMonteCarlo_1438664204371_1877107485/out/reduce-out
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

When I checked the logs, I see the following error on each slave node:

./container_1438664093458_0001_01_000001/syslog:2015-08-04 10:27:09,755 ERROR [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: ERROR IN CONTACTING RM.
./container_1438664093458_0001_01_000001/syslog:2015-08-04 10:27:10,770 ERROR [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: ERROR IN CONTACTING RM.

I searched for this error but couldn't find a working solution. Some
answers suggest it can happen when HDFS has run out of disk space. If
that is the case, how can I check the disk-space usage?
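
Would something like the following be the right way to check? (I am
assuming the standard HDFS shell commands here.)

$ hdfs dfsadmin -report          # configured capacity, DFS used and remaining, per datanode
$ hdfs dfs -du -h /user/bduser   # space consumed under my user directory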

Please help.

Thanks
Ravikant

Re: Unable to run map-reduce pi example (Hadoop 2.2.0)

Posted by "sreebalineni ." <sr...@gmail.com>.
Did you check whether the input files exist in HDFS by doing an ls? I
think the FileNotFoundException is what needs attention here.
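
For example (the path below comes from the exception in your log;
adjust it to whatever directory the job actually created):

$ hdfs dfs -ls /user/bduser
$ hdfs dfs -ls /user/bduser/QuasiMonteCarlo_1438664204371_1877107485/out
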
On 4 Aug 2015 13:09, "Ravikant Dindokar" <ra...@gmail.com> wrote:

> Hi Ashwin,
>
> On the namenode, I can see the ResourceManager process running:
> [on namenode]
> $ jps
> 7383 ResourceManager
> 7785 SecondaryNameNode
> 7098 NameNode
> 3634 Jps

Re: Unable to run map-reduce pi example (Hadoop 2.2.0)

Posted by Ravikant Dindokar <ra...@gmail.com>.
Hi Ashwin,

On the namenode, I can see the ResourceManager process running:
[on namenode]
$ jps
7383 ResourceManager
7785 SecondaryNameNode
7098 NameNode
3634 Jps
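
Is it also worth checking from one of the slave nodes that the RM ports
are reachable? For example (8032 is the client address from my logs;
8030 is YARN's default scheduler port, which the application master
talks to):

$ nc -zv orion-00 8032   # yarn.resourcemanager.address (job submission)
$ nc -zv orion-00 8030   # yarn.resourcemanager.scheduler.address (AM allocation calls)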


On Tue, Aug 4, 2015 at 12:07 PM, James Bond <bo...@gmail.com> wrote:

> Looks like it's not able to connect to the ResourceManager. Check that
> your ResourceManager is configured properly, in particular the
> ResourceManager address.
>
> Thanks,
> Ashwin

Re: Unable to run map-reduce pi example (Hadoop 2.2.0)

Posted by James Bond <bo...@gmail.com>.
Looks like it's not able to connect to the ResourceManager. Check that
your ResourceManager is configured properly, in particular the
ResourceManager address.
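
For reference, the relevant yarn-site.xml entries look roughly like
this (the hostname and ports below simply mirror your logs and the
YARN 2.2.0 defaults; substitute your own values):

<property>
  <name>yarn.resourcemanager.address</name>
  <value>orion-00:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>orion-00:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>orion-00:8031</value>
</property>

These values have to be visible on every slave node, since the
MapReduce application master (the RMCommunicator in your error) is
what contacts the scheduler address.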

Thanks,
Ashwin

On Tue, Aug 4, 2015 at 10:35 AM, Ravikant Dindokar <ra...@gmail.com>
wrote:
