Posted to common-user@hadoop.apache.org by Sujit Dhamale <su...@gmail.com> on 2012/04/02 16:30:35 UTC

Re: getting NullPointerException while running Word count example

Can someone please look into the issue below?
Thanks in advance.

On Wed, Mar 7, 2012 at 9:09 AM, Sujit Dhamale <su...@gmail.com> wrote:

> Hadoop version : hadoop-0.20.203.0rc1.tar
> Operating System : Ubuntu 11.10
>
>
>
> On Wed, Mar 7, 2012 at 12:19 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> Hi Sujit,
>>
>> Please also tell us which version/distribution of Hadoop this is?
>>
>> On Tue, Mar 6, 2012 at 11:27 PM, Sujit Dhamale <su...@gmail.com>
>> wrote:
>> > Hi,
>> >
>> > I am new to Hadoop. I installed Hadoop as per
>> >
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>> >
>> >
>> > While running the Word count example I am getting a NullPointerException.
>> >
>> > Can someone please look into this issue?
>> >
>> > Thanks in advance!
>> >
>> >
>> > hduser@sujit:~/Desktop/hadoop$ bin/hadoop dfs -ls /user/hduser/data
>> > Found 3 items
>> > -rw-r--r--   1 hduser supergroup     674566 2012-03-06 23:04
>> > /user/hduser/data/pg20417.txt
>> > -rw-r--r--   1 hduser supergroup    1573150 2012-03-06 23:04
>> > /user/hduser/data/pg4300.txt
>> > -rw-r--r--   1 hduser supergroup    1423801 2012-03-06 23:04
>> > /user/hduser/data/pg5000.txt
>> >
>> > hduser@sujit:~/Desktop/hadoop$ bin/hadoop jar hadoop*examples*.jar
>> > wordcount /user/hduser/data /user/hduser/gutenberg-outputd
>> >
>> > 12/03/06 23:14:33 INFO input.FileInputFormat: Total input paths to process : 3
>> > 12/03/06 23:14:33 INFO mapred.JobClient: Running job: job_201203062221_0002
>> > 12/03/06 23:14:34 INFO mapred.JobClient:  map 0% reduce 0%
>> > 12/03/06 23:14:49 INFO mapred.JobClient:  map 66% reduce 0%
>> > 12/03/06 23:14:55 INFO mapred.JobClient:  map 100% reduce 0%
>> > 12/03/06 23:14:58 INFO mapred.JobClient: Task Id : attempt_201203062221_0002_r_000000_0, Status : FAILED
>> > Error: java.lang.NullPointerException
>> >    at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>> >    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2900)
>> >    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2820)
>> >
>> > 12/03/06 23:15:07 INFO mapred.JobClient: Task Id : attempt_201203062221_0002_r_000000_1, Status : FAILED
>> > Error: java.lang.NullPointerException
>> >    at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>> >    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2900)
>> >    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2820)
>> >
>> > 12/03/06 23:15:16 INFO mapred.JobClient: Task Id : attempt_201203062221_0002_r_000000_2, Status : FAILED
>> > Error: java.lang.NullPointerException
>> >    at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:768)
>> >    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2900)
>> >    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2820)
>> >
>> > 12/03/06 23:15:31 INFO mapred.JobClient: Job complete: job_201203062221_0002
>> > 12/03/06 23:15:31 INFO mapred.JobClient: Counters: 20
>> > 12/03/06 23:15:31 INFO mapred.JobClient:   Job Counters
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Launched reduce tasks=4
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=22084
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Launched map tasks=3
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Data-local map tasks=3
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Failed reduce tasks=1
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=16799
>> > 12/03/06 23:15:31 INFO mapred.JobClient:   FileSystemCounters
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     FILE_BYTES_READ=740520
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     HDFS_BYTES_READ=3671863
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=2278287
>> > 12/03/06 23:15:31 INFO mapred.JobClient:   File Input Format Counters
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Bytes Read=3671517
>> > 12/03/06 23:15:31 INFO mapred.JobClient:   Map-Reduce Framework
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Map output materialized bytes=1474341
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Combine output records=102322
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Map input records=77932
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Spilled Records=153640
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Map output bytes=6076095
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Combine input records=629172
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     Map output records=629172
>> > 12/03/06 23:15:31 INFO mapred.JobClient:     SPLIT_RAW_BYTES=346
>> > hduser@sujit:~/Desktop/hadoop$
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re: getting NullPointerException while running Word count example

Posted by kasi subrahmanyam <ka...@gmail.com>.
Hi Sujit,

I think this is a problem with the host name configuration.
Could you please check whether you have added the host names of the master
and the slaves to the /etc/hosts file on all of the nodes?
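
For example, a minimal /etc/hosts for a single-node setup might look something
like the following (the IP address and host name here are only placeholders,
use your machine's actual values):

    127.0.0.1     localhost
    192.168.1.10  sujit    # should match the host name used in fs.default.name and mapred.job.tracker

On a multi-node cluster each node would also need entries for the master and
every slave, and every name should resolve the same way on every node (the
hostname and ping commands can be used to verify this). On Ubuntu, also check
the default "127.0.1.1 <hostname>" line, which is known to confuse Hadoop's
host name resolution in some setups.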

