Posted to common-user@hadoop.apache.org by Da Zheng <zh...@gmail.com> on 2010/12/29 19:20:51 UTC
documentation of hadoop implementation
Hello,
Is the implementation of Hadoop documented anywhere, especially the part where
the output of mappers is partitioned, sorted, and spilled to disk? I tried to
understand it, but it's rather complex. Is there any document that can help me
understand it?
Thanks,
Da
Re: documentation of hadoop implementation
Posted by Da Zheng <zh...@gmail.com>.
If you want the slides, you can find them here:
http://www.slideshare.net/hadoopusergroup/ordered-record-collection?from=ss_embed.
I hope it helps.
Da
On 01/03/2011 11:53 AM, bharath vissapragada wrote:
> Any idea how to download this? It isn't buffering correctly :/
>
> On Thu, Dec 30, 2010 at 9:00 PM, Mark Kerzner<ma...@gmail.com> wrote:
>> Thanks, Da, this makes you a better Googler, and an expert one.
>>
>> Cheers,
>> Mark
>>
>> On Thu, Dec 30, 2010 at 9:25 AM, Da Zheng<zh...@gmail.com> wrote:
>>
>>> There is someone else like me who had trouble finding it :-) I thought I
>>> was the only one who had the problem, so I didn't send the link.
>>>
>>> http://developer.yahoo.com/blogs/hadoop/posts/2010/01/hadoop_bay_area_january_2010_u/
>>>
>>> Best,
>>> Da
>>>
>>> On 12/30/10 12:24 AM, Mark Kerzner wrote:
>>>> Da, where did you find it?
>>>>
>>>> Thank you,
>>>> Mark
>>>>
>>>> On Wed, Dec 29, 2010 at 11:22 PM, Da Zheng<zh...@gmail.com>
>>> wrote:
>>>>> Hi Todd,
>>>>>
>>>>> It's exactly what I was looking for. Thanks.
>>>>>
>>>>> Best,
>>>>> Da
>>>>>
>>>>> On 12/29/10 5:02 PM, Todd Lipcon wrote:
>>>>>> Hi Da,
>>>>>>
>>>>>> Chris Douglas had an excellent presentation at the Hadoop User Group
>>> last
>>>>>> year on just this topic. Maybe you can find his slides or a recording
>>> on
>>>>>> YDN/google?
>>>>>>
>>>>>> -Todd
Re: documentation of hadoop implementation
Posted by bharath vissapragada <bh...@students.iiit.ac.in>.
Any idea how to download this? It isn't buffering correctly :/
Re: Retrying connect to server
Posted by maha <ma...@umail.ucsb.edu>.
Hi Cavus,
Please check that the Hadoop JobTracker and the other daemons are running by typing "jps". If one of (JobTracker, TaskTracker, NameNode, DataNode) is missing, then you need to run 'stop-all', format the NameNode, and run 'start-all' again.
Maha
On Dec 30, 2010, at 7:52 AM, Cavus,M.,Fa. Post Direkt wrote:
> I process this
>
> ./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount gutenberg gutenberg-output
>
> I get this
> Does anyone know why I get this error?
>
> 10/12/30 16:48:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
> 10/12/30 16:49:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 0 time(s).
> 10/12/30 16:49:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 1 time(s).
> 10/12/30 16:49:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 2 time(s).
> 10/12/30 16:49:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 3 time(s).
> 10/12/30 16:49:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 4 time(s).
> 10/12/30 16:49:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 5 time(s).
> 10/12/30 16:49:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 6 time(s).
> 10/12/30 16:49:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 7 time(s).
> 10/12/30 16:49:09 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 8 time(s).
> 10/12/30 16:49:10 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 9 time(s).
> Exception in thread "main" java.net.ConnectException: Call to localhost/127.0.0.1:9001 failed on connection exception: java.net.ConnectException: Connection refused
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:932)
> at org.apache.hadoop.ipc.Client.call(Client.java:908)
> at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
> at $Proxy0.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:228)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:224)
> at org.apache.hadoop.mapreduce.Cluster.createRPCProxy(Cluster.java:82)
> at org.apache.hadoop.mapreduce.Cluster.createClient(Cluster.java:94)
> at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:70)
> at org.apache.hadoop.mapreduce.Job.<init>(Job.java:129)
> at org.apache.hadoop.mapreduce.Job.<init>(Job.java:134)
> at org.postdirekt.hadoop.WordCount.main(WordCount.java:19)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:192)
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
> at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:417)
> at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:207)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1025)
> at org.apache.hadoop.ipc.Client.call(Client.java:885)
> ... 15 more
Re: Retrying connect to server
Posted by Esteban Gutierrez Moguel <es...@gmail.com>.
Hello Cavus,
Is your JobTracker running on localhost? It would be great if you could
provide more information about your current Hadoop setup.
cheers,
esteban.
estebangutierrez.com — twitter.com/esteban
Re: Retrying connect to server
Posted by li ping <li...@gmail.com>.
Make sure your /etc/hosts file contains the correct IP/hostname pairs. This
is very important.
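For what it's worth, the resolution half of this advice can be checked with a few lines of plain Java. This is only an illustrative sketch, not code from the thread: the class name HostCheck is mine, and which hostnames matter depends on your cluster. It resolves a hostname the way a Hadoop client would and prints the address it maps to, so a stale or missing /etc/hosts entry shows up immediately.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Quick sanity check of hostname resolution: a wrong /etc/hosts entry is a
// common cause of "Retrying connect to server" loops in the Hadoop IPC client.
public class HostCheck {
    public static void main(String[] args) {
        // Add your master and slave hostnames here for a real cluster.
        String[] hosts = {"localhost"};
        for (String h : hosts) {
            try {
                InetAddress addr = InetAddress.getByName(h);
                // Prints the address this machine resolves the name to.
                System.out.println(h + " -> " + addr.getHostAddress());
            } catch (UnknownHostException e) {
                System.out.println(h + " -> UNRESOLVED (check /etc/hosts)");
            }
        }
    }
}
```

If a daemon's configured hostname prints as UNRESOLVED, or maps to an address other than the one the daemon is actually listening on, fix /etc/hosts before touching anything else.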
--
-----李平
Re: ClassNotFoundException
Posted by Harsh J <qw...@gmail.com>.
The answer is in your log output:
10/12/31 10:26:54 WARN mapreduce.JobSubmitter: No job jar file set.
User classes may not be found. See Job or Job#setJar(String).
Alternatively, use Job#setJarByClass(Class).
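In the job driver, the fix Harsh points at looks roughly like this. This is only a sketch, not the original poster's code: the class name WordCount comes from the jar listing elsewhere in the thread, and the rest is assumed from the Hadoop 0.21-era Job API. It is a fragment that needs the Hadoop jars on the classpath and a running cluster, so it is not runnable on its own.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "wordcount");
        // Without this line the submitter logs "No job jar file set" and the
        // TaskTracker JVMs cannot load org.postdirekt.hadoop.Map, which is
        // exactly the ClassNotFoundException shown below: the jar containing
        // your user classes is never shipped to the cluster.
        job.setJarByClass(WordCount.class);
        // ... set mapper/reducer/input/output paths as before ...
    }
}
```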
On Fri, Dec 31, 2010 at 3:02 PM, Cavus,M.,Fa. Post Direkt
<M....@postdirekt.de> wrote:
> I look in my Jar File but I get a ClassNotFoundException why?:
--
Harsh J
www.harshj.com
ClassNotFoundException
Posted by "Cavus,M.,Fa. Post Direkt" <M....@postdirekt.de>.
I looked in my jar file, but I still get a ClassNotFoundException. Why?:
$ jar -xvf hd.jar
dekomprimiert: META-INF/MANIFEST.MF
dekomprimiert: org/postdirekt/hadoop/Map.class
dekomprimiert: org/postdirekt/hadoop/Map.java
dekomprimiert: org/postdirekt/hadoop/WordCount.class
dekomprimiert: org/postdirekt/hadoop/WordCount.java
dekomprimiert: org/postdirekt/hadoop/Reduce2.class
dekomprimiert: org/postdirekt/hadoop/Reduce2.java
$ ./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount
gutenberg gutenberg-output
10/12/31 10:26:54 INFO security.Groups: Group mapping
impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
cacheTimeout=300000
10/12/31 10:26:54 WARN conf.Configuration: mapred.task.id is deprecated.
Instead, use mapreduce.task.attempt.id
10/12/31 10:26:54 WARN mapreduce.JobSubmitter: Use GenericOptionsParser
for parsing the arguments. Applications should implement Tool for the
same.
10/12/31 10:26:54 WARN mapreduce.JobSubmitter: No job jar file set.
User classes may not be found. See Job or Job#setJar(String).
10/12/31 10:26:54 INFO input.FileInputFormat: Total input paths to
process : 1
10/12/31 10:26:55 WARN conf.Configuration: mapred.map.tasks is
deprecated. Instead, use mapreduce.job.maps
10/12/31 10:26:55 INFO mapreduce.JobSubmitter: number of splits:1
10/12/31 10:26:55 INFO mapreduce.JobSubmitter: adding the following
namenodes' delegation tokens:null
10/12/31 10:26:55 INFO mapreduce.Job: Running job: job_201012311021_0002
10/12/31 10:26:56 INFO mapreduce.Job: map 0% reduce 0%
10/12/31 10:27:11 INFO mapreduce.Job: Task Id :
attempt_201012311021_0002_m_000000_0, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException:
org.postdirekt.hadoop.Map
at
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1128)
at
org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContex
tImpl.java:167)
at
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:612)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:328)
at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformatio
n.java:742)
at org.apache.hadoop.mapred.Child.main(Child.java:211)
Caused by: java.lang.ClassNotFoundException: org.postdirekt.hadoop.Map
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.m
10/12/31 10:27:24 INFO mapreduce.Job: Task Id :
attempt_201012311021_0002_m_000000_1, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException:
org.postdirekt.hadoop.Map
at
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1128)
at
org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContex
tImpl.java:167)
at
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:612)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:328)
at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformatio
n.java:742)
at org.apache.hadoop.mapred.Child.main(Child.java:211)
Caused by: java.lang.ClassNotFoundException: org.postdirekt.hadoop.Map
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.m
10/12/31 10:27:36 INFO mapreduce.Job: Task Id :
attempt_201012311021_0002_m_000000_2, Status : FAILED
java.lang.RuntimeException: java.lang.ClassNotFoundException:
org.postdirekt.hadoop.Map
at
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1128)
at
org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContex
tImpl.java:167)
at
org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:612)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:328)
at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformatio
n.java:742)
at org.apache.hadoop.mapred.Child.main(Child.java:211)
Caused by: java.lang.ClassNotFoundException: org.postdirekt.hadoop.Map
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.m
10/12/31 10:27:51 INFO mapreduce.Job: Job complete:
job_201012311021_0002
10/12/31 10:27:51 INFO mapreduce.Job: Counters: 7
Job Counters
Data-local map tasks=4
Total time spent by all maps waiting after reserving
slots (ms)=0
Total time spent by all reduces waiting after reserving
slots (ms)=0
Failed map tasks=1
SLOTS_MILLIS_MAPS=42025
SLOTS_MILLIS_REDUCES=0
Launched map tasks=4
RE: Retrying connect to server
Posted by "Cavus,M.,Fa. Post Direkt" <M....@postdirekt.de>.
Hi,
I had forgotten to run start-mapred.sh.
Thanks, all
RE: Retrying connect to server
Posted by "Cavus,M.,Fa. Post Direkt" <M....@postdirekt.de>.
Hi,
I do get this:
$ jps
6017 DataNode
5805 NameNode
6234 SecondaryNameNode
6354 Jps
What can I do to start JobTracker?
Here my config Files:
$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
<description>The host and port that the MapReduce job tracker runs
at.</description>
</property>
</configuration>
cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>The actual number of replications can be specified when the
file is created.</description>
</property>
</configuration>
$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation.
</description>
</property>
</configuration>
On 2010-12-30, at 8:53 PM, Adarsh Sharma wrote:
> This is the most common issue after configuring a Hadoop cluster.
>
> Reasons:
>
> 1. Your NameNode or JobTracker is not running. Verify through the web UI and the jps command.
> 2. DNS resolution. You must have IP-hostname entries for all nodes in the /etc/hosts file.
>
>
>
> Best Regards
>
> Adarsh Sharma
Re: Retrying connect to server
Posted by James Seigel <ja...@tynt.com>.
Or....
3) The configuration (or lack thereof) on the machine you are running this from has no idea where your DFS or JobTracker is :)
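That third cause is worth checking first: the localhost/127.0.0.1:9001 in the log is whatever the client-side configuration points at. A sketch of the relevant entry, using the classic property name from this era of Hadoop (the host and port here are examples, not your values):

```xml
<!-- mapred-site.xml on the machine you submit the job from (sketch) -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- must match the host:port the JobTracker actually listens on -->
    <value>jobtracker-host:9001</value>
  </property>
</configuration>
```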
Cheers
James.
On 2010-12-30, at 8:53 PM, Adarsh Sharma wrote:
> Cavus,M.,Fa. Post Direkt wrote:
>> I process this
>>
>> ./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount gutenberg gutenberg-output
>>
>> I get this
>> Does anyone know why I get this error?
>>
>> 10/12/30 16:48:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
>> [ten connection retries and the same ConnectException stack trace as in the original post; elided]
>>
> This is the most common issue after configuring a Hadoop cluster.
>
> Reasons:
>
> 1. Your NameNode or JobTracker is not running. Verify through the web UI and the jps command.
> 2. DNS resolution. You must have IP-hostname entries for all nodes in the /etc/hosts file.
>
>
>
> Best Regards
>
> Adarsh Sharma
Re: Retrying connect to server
Posted by Adarsh Sharma <ad...@orkash.com>.
Cavus,M.,Fa. Post Direkt wrote:
> I process this
>
> ./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount gutenberg gutenberg-output
>
> I get this
> Does anyone know why I get this error?
>
> 10/12/30 16:48:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
> [ten connection retries and the same ConnectException stack trace as in the original post; elided]
>
This is the most common issue after configuring a Hadoop cluster.
Reasons:
1. Your NameNode or JobTracker is not running. Verify through the web UI
and the jps command.
2. DNS resolution. You must have IP-hostname entries for all nodes in
the /etc/hosts file.
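To check point 1 quickly from the client machine, you can test whether anything is listening on the JobTracker address at all; "Connection refused" in the log means nothing is. A small sketch (port_open is a hypothetical helper, not part of Hadoop):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    'Connection refused' in the Hadoop client log corresponds to this
    returning False: no daemon is bound to that address.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("localhost", 9001) for the JobTracker address in the error
```

If this returns False, start (or restart) the daemon and confirm it stays up with jps before re-submitting the job.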
Best Regards
Adarsh Sharma
Retrying connect to server
Posted by "Cavus,M.,Fa. Post Direkt" <M....@postdirekt.de>.
I process this
./hadoop jar ../../hadoopjar/hd.jar org.postdirekt.hadoop.WordCount gutenberg gutenberg-output
I get this
Does anyone know why I get this error?
10/12/30 16:48:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
10/12/30 16:49:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 0 time(s).
10/12/30 16:49:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 1 time(s).
10/12/30 16:49:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 2 time(s).
10/12/30 16:49:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 3 time(s).
10/12/30 16:49:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 4 time(s).
10/12/30 16:49:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 5 time(s).
10/12/30 16:49:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 6 time(s).
10/12/30 16:49:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 7 time(s).
10/12/30 16:49:09 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 8 time(s).
10/12/30 16:49:10 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9001. Already tried 9 time(s).
Exception in thread "main" java.net.ConnectException: Call to localhost/127.0.0.1:9001 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:932)
at org.apache.hadoop.ipc.Client.call(Client.java:908)
at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
at $Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:228)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:224)
at org.apache.hadoop.mapreduce.Cluster.createRPCProxy(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.createClient(Cluster.java:94)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:70)
at org.apache.hadoop.mapreduce.Job.<init>(Job.java:129)
at org.apache.hadoop.mapreduce.Job.<init>(Job.java:134)
at org.postdirekt.hadoop.WordCount.main(WordCount.java:19)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:192)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:417)
at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:207)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1025)
at org.apache.hadoop.ipc.Client.call(Client.java:885)
... 15 more
Re: documentation of hadoop implementation
Posted by Mark Kerzner <ma...@gmail.com>.
Thanks, Da, this makes you a better Googler, and an expert one.
Cheers,
Mark
On Thu, Dec 30, 2010 at 9:25 AM, Da Zheng <zh...@gmail.com> wrote:
> There is someone else like me who had problems finding it :-) I thought I was
> the only one who had the problem, so I didn't send the link.
>
> http://developer.yahoo.com/blogs/hadoop/posts/2010/01/hadoop_bay_area_january_2010_u/
>
> Best,
> Da
>
> On 12/30/10 12:24 AM, Mark Kerzner wrote:
> > Da, where did you find it?
> >
> > Thank you,
> > Mark
> >
> > On Wed, Dec 29, 2010 at 11:22 PM, Da Zheng <zh...@gmail.com>
> wrote:
> >
> >> Hi Todd,
> >>
> >> It's exactly what I was looking for. Thanks.
> >>
> >> Best,
> >> Da
> >>
> >> On 12/29/10 5:02 PM, Todd Lipcon wrote:
> >>> Hi Da,
> >>>
> >>> Chris Douglas had an excellent presentation at the Hadoop User Group
> last
> >>> year on just this topic. Maybe you can find his slides or a recording
> on
> >>> YDN/google?
> >>>
> >>> -Todd
> >>>
> >>> On Wed, Dec 29, 2010 at 10:20 AM, Da Zheng <zh...@gmail.com>
> >> wrote:
> >>>
> >>>> Hello,
> >>>>
> >>>> Is the implementation of Hadoop documented somewhere? especially that
> >> part
> >>>> where
> >>>> the output of mappers is partitioned, sorted and spilled to the disk.
> I
> >>>> tried to
> >>>> understand it, but it's rather complex. Is there any document that can
> >> help
> >>>> me
> >>>> understand it?
> >>>>
> >>>> Thanks,
> >>>> Da
> >>>>
> >>>
> >>>
> >>>
> >>
> >>
> >
>
>
Re: documentation of hadoop implementation
Posted by Da Zheng <zh...@gmail.com>.
There is someone else like me who had problems finding it :-) I thought I was the
only one who had the problem, so I didn't send the link.
http://developer.yahoo.com/blogs/hadoop/posts/2010/01/hadoop_bay_area_january_2010_u/
Best,
Da
On 12/30/10 12:24 AM, Mark Kerzner wrote:
> Da, where did you find it?
>
> Thank you,
> Mark
>
> On Wed, Dec 29, 2010 at 11:22 PM, Da Zheng <zh...@gmail.com> wrote:
>
>> Hi Todd,
>>
>> It's exactly what I was looking for. Thanks.
>>
>> Best,
>> Da
>>
>> On 12/29/10 5:02 PM, Todd Lipcon wrote:
>>> Hi Da,
>>>
>>> Chris Douglas had an excellent presentation at the Hadoop User Group last
>>> year on just this topic. Maybe you can find his slides or a recording on
>>> YDN/google?
>>>
>>> -Todd
>>>
>>> On Wed, Dec 29, 2010 at 10:20 AM, Da Zheng <zh...@gmail.com>
>> wrote:
>>>
>>>> Hello,
>>>>
>>>> Is the implementation of Hadoop documented somewhere? especially that
>> part
>>>> where
>>>> the output of mappers is partitioned, sorted and spilled to the disk. I
>>>> tried to
>>>> understand it, but it's rather complex. Is there any document that can
>> help
>>>> me
>>>> understand it?
>>>>
>>>> Thanks,
>>>> Da
>>>>
>>>
>>>
>>>
>>
>>
>
Re: documentation of hadoop implementation
Posted by Mark Kerzner <ma...@gmail.com>.
Da, where did you find it?
Thank you,
Mark
On Wed, Dec 29, 2010 at 11:22 PM, Da Zheng <zh...@gmail.com> wrote:
> Hi Todd,
>
> It's exactly what I was looking for. Thanks.
>
> Best,
> Da
>
> On 12/29/10 5:02 PM, Todd Lipcon wrote:
> > Hi Da,
> >
> > Chris Douglas had an excellent presentation at the Hadoop User Group last
> > year on just this topic. Maybe you can find his slides or a recording on
> > YDN/google?
> >
> > -Todd
> >
> > On Wed, Dec 29, 2010 at 10:20 AM, Da Zheng <zh...@gmail.com>
> wrote:
> >
> >> Hello,
> >>
> >> Is the implementation of Hadoop documented somewhere? especially that
> part
> >> where
> >> the output of mappers is partitioned, sorted and spilled to the disk. I
> >> tried to
> >> understand it, but it's rather complex. Is there any document that can
> help
> >> me
> >> understand it?
> >>
> >> Thanks,
> >> Da
> >>
> >
> >
> >
>
>
Re: documentation of hadoop implementation
Posted by Da Zheng <zh...@gmail.com>.
Hi Todd,
It's exactly what I was looking for. Thanks.
Best,
Da
On 12/29/10 5:02 PM, Todd Lipcon wrote:
> Hi Da,
>
> Chris Douglas had an excellent presentation at the Hadoop User Group last
> year on just this topic. Maybe you can find his slides or a recording on
> YDN/google?
>
> -Todd
>
> On Wed, Dec 29, 2010 at 10:20 AM, Da Zheng <zh...@gmail.com> wrote:
>
>> Hello,
>>
>> Is the implementation of Hadoop documented somewhere? especially that part
>> where
>> the output of mappers is partitioned, sorted and spilled to the disk. I
>> tried to
>> understand it, but it's rather complex. Is there any document that can help
>> me
>> understand it?
>>
>> Thanks,
>> Da
>>
>
>
>
Re: documentation of hadoop implementation
Posted by Todd Lipcon <to...@cloudera.com>.
Hi Da,
Chris Douglas had an excellent presentation at the Hadoop User Group last
year on just this topic. Maybe you can find his slides or a recording on
YDN/google?
-Todd
On Wed, Dec 29, 2010 at 10:20 AM, Da Zheng <zh...@gmail.com> wrote:
> Hello,
>
> Is the implementation of Hadoop documented somewhere? especially that part
> where
> the output of mappers is partitioned, sorted and spilled to the disk. I
> tried to
> understand it, but it's rather complex. Is there any document that can help
> me
> understand it?
>
> Thanks,
> Da
>
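For anyone who finds this thread before finding the slides: the map-side mechanism the question describes can be caricatured in a few lines. This is a conceptual sketch only (hash partitioning, a bounded in-memory buffer, sorting by (partition, key), spilling runs, then merging), not Hadoop's actual MapOutputBuffer code:

```python
import heapq

NUM_REDUCERS = 4
BUFFER_LIMIT = 3  # tiny so the example spills; Hadoop's buffer is io.sort.mb

def partition(key):
    # Hadoop's default HashPartitioner does roughly this
    return hash(key) % NUM_REDUCERS

buffer, spills = [], []

def collect(key, value):
    """Called for every (key, value) a mapper emits."""
    buffer.append((partition(key), key, value))
    if len(buffer) >= BUFFER_LIMIT:
        spill()

def spill():
    """Sort the buffer by (partition, key) and write it out as one run."""
    global buffer
    spills.append(sorted(buffer))
    buffer = []

def merge():
    """Merge-sort all spill runs into the final map output."""
    spill()  # flush whatever is left in memory
    return list(heapq.merge(*spills))
```

Each reducer then fetches its contiguous slice of the merged, partition-sorted output. The real implementation adds a combiner, index files, and byte-level circular buffers, but the partition, sort, spill, merge shape is the same.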
--
Todd Lipcon
Software Engineer, Cloudera