Posted to user@mahout.apache.org by sharath jagannath <sh...@gmail.com> on 2011/02/02 18:55:50 UTC

Connect to hadoop: failure

Hey All,

I am trying the clustering quick-start tutorial, but I am not able to connect
to Hadoop.
Stack trace:

HADOOP_CONF_DIR=/Users/sjagannath/hadoop/conf
11/02/02 09:20:13 WARN driver.MahoutDriver: No
org.apache.mahout.clustering.syntheticcontrol.canopy.Job.props found on
classpath, will use command-line arguments only
11/02/02 09:20:13 INFO canopy.Job: Running with default arguments
11/02/02 09:20:14 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 0 time(s).
11/02/02 09:20:15 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 1 time(s).
11/02/02 09:20:16 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 2 time(s).
11/02/02 09:20:17 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 3 time(s).
11/02/02 09:20:18 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 4 time(s).
11/02/02 09:20:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 5 time(s).
11/02/02 09:20:20 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 6 time(s).
11/02/02 09:20:21 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 7 time(s).
11/02/02 09:20:22 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 8 time(s).
11/02/02 09:20:23 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9002. Already tried 9 time(s).
Exception in thread "main" java.net.ConnectException: Call to localhost/127.0.0.1:9002 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
    at org.apache.hadoop.ipc.Client.call(Client.java:743)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
    at org.apache.mahout.common.HadoopUtil.overwriteOutput(HadoopUtil.java:38)
    at org.apache.mahout.clustering.syntheticcontrol.canopy.Job.main(Job.java:53)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:184)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
    at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
    at org.apache.hadoop.ipc.Client.call(Client.java:720)
    ... 27 more


Any idea how I should resolve it?
-- 
Thanks,
Sharath
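[Editor's note: the trace above shows the client repeatedly dialing localhost/127.0.0.1:9002 and getting "Connection refused", i.e. nothing is listening on that port. A first check is to confirm which address fs.default.name points at in core-site.xml. The sketch below runs against a hypothetical config fragment, not the poster's actual file, which would live under $HADOOP_CONF_DIR:]

```shell
# Hypothetical core-site.xml fragment; substitute the real file from
# $HADOOP_CONF_DIR in practice.
conf='<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9002</value>
</property>'
# Extract the namenode address the HDFS client will dial.
addr=$(printf '%s\n' "$conf" | sed -n 's|.*<value>hdfs://\(.*\)</value>.*|\1|p')
echo "$addr"
```

If the host:port printed here matches the one in the trace, the config is self-consistent, and the likely culprit is that no namenode process is actually listening there.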

Re: Connect to hadoop: failure

Posted by sharath jagannath <sh...@gmail.com>.
Oh yeah, my bad: for some reason the namenode was not started.
jps showed it; I had overlooked it. :D
The examples are working now. :D
Thank you everybody.

Cheers,
Sharath
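[Editor's note: for anyone hitting the same symptom, the fix is simply to get the namenode running again. A sketch, assuming a standard Hadoop 0.20-era layout ($HADOOP_HOME and the script names are assumptions); the jps output is simulated so the check can be shown end-to-end:]

```shell
# Restart HDFS so the namenode comes up (requires a real install, hence
# shown as comments):
#   $HADOOP_HOME/bin/stop-dfs.sh
#   $HADOOP_HOME/bin/start-dfs.sh
# Afterwards `jps` should list a NameNode entry. Simulated check:
jps_output='2901 NameNode
2164 JobTracker
2238 TaskTracker
2115 SecondaryNameNode
2015 DataNode'
# -w keeps SecondaryNameNode from matching as NameNode.
printf '%s\n' "$jps_output" | grep -cw NameNode   # prints 1
```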

On Wed, Feb 2, 2011 at 10:12 AM, sharath jagannath <
sharathjagannath@gmail.com> wrote:

> [quoted earlier messages snipped]

Re: Connect to hadoop: failure

Posted by sharath jagannath <sh...@gmail.com>.
Thanks Sean,

I have enabled local ssh.
I will post this question on hadoop forum.

Thanks,
Sharath

On Wed, Feb 2, 2011 at 10:10 AM, Sean Owen <sr...@gmail.com> wrote:

> [quoted earlier messages snipped]

Re: Connect to hadoop: failure

Posted by Sean Owen <sr...@gmail.com>.
This is really a Hadoop question, not a Mahout one.
It's not finding your local Hadoop cluster. One guess is that you haven't
enabled local ssh login, if you're using a Mac; I think that's needed for it
to work.
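[Editor's note: if local ssh login is the missing piece, the usual recipe is a passwordless key for localhost; on a Mac you also have to enable Remote Login under System Preferences > Sharing. This is the common recipe, not something from the thread, and the demonstration writes into a scratch directory rather than ~/.ssh so it is non-destructive:]

```shell
# Demonstrate key setup in a scratch directory; the real target is ~/.ssh.
dir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$dir/id_rsa" -q   # empty passphrase
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
chmod 600 "$dir/authorized_keys"
ls "$dir"
# Real usage would be:
#   ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
#   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
#   ssh localhost   # should now log in without a password prompt
```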

On Wed, Feb 2, 2011 at 6:07 PM, sharath jagannath <
sharathjagannath@gmail.com> wrote:

> [quoted jps output snipped]

Re: Connect to hadoop: failure

Posted by sharath jagannath <sh...@gmail.com>.
I am running Hadoop in pseudo-distributed mode:

jps output:
2164 JobTracker
2238 TaskTracker
2115 SecondaryNameNode
2015 DataNode
4619 Jps
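[Editor's note: worth spotting in hindsight, the listing above has SecondaryNameNode and DataNode but no standalone NameNode entry, which is exactly what the connection refusals point at. A small check over that sample output; the -w flag matters, since SecondaryNameNode would otherwise match:]

```shell
jps_output='2164 JobTracker
2238 TaskTracker
2115 SecondaryNameNode
2015 DataNode
4619 Jps'
if printf '%s\n' "$jps_output" | grep -qw NameNode; then
  echo "NameNode running"
else
  echo "NameNode missing"
fi
```

Against the sample above this prints "NameNode missing".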


Thanks,
Sharath

On Wed, Feb 2, 2011 at 10:05 AM, Lokendra Singh <ls...@gmail.com> wrote:

> [quoted reply and original stack trace snipped]



-- 
Thanks,
Sharath Jagannath

Re: Connect to hadoop: failure

Posted by Lokendra Singh <ls...@gmail.com>.
Hi,

Seems something is wrong with HDFS on your Hadoop cluster.
Can you run 'jps' on the master and slave nodes to check that the
'datanode' processes are running on the slaves and that the 'namenode' and
'secondarynamenode' are running on the master? You can also check the logs
under $HADOOP_HOME/logs/ for any problems with the cluster.

Regards
Lokendra
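[Editor's note: to make the log check concrete, scanning the daemon logs for ERROR/Exception lines usually surfaces why a daemon died. The path below is an assumption for a typical install, and the sample line is fabricated for illustration, not taken from the poster's logs:]

```shell
# Real usage (path assumed):
#   grep -iE 'error|exception' "$HADOOP_HOME"/logs/hadoop-*-namenode-*.log
# Demonstrated on a made-up log line:
sample='ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use'
printf '%s\n' "$sample" | grep -ciE 'error|exception'   # prints 1
```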


On Wed, Feb 2, 2011 at 11:25 PM, sharath jagannath <
sharathjagannath@gmail.com> wrote:

> [quoted original message and stack trace snipped]

Re: Connect to hadoop: failure

Posted by sharath jagannath <sh...@gmail.com>.
But I am able to run it locally:

no HADOOP_HOME set, running locally
Feb 2, 2011 10:02:40 AM org.slf4j.impl.JCLLoggerAdapter warn
WARNING: No org.apache.mahout.clustering.syntheticcontrol.canopy.Job.props
found on classpath, will use command-line arguments only
Feb 2, 2011 10:02:40 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Running with default arguments
Feb 2, 2011 10:02:40 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting output
Feb 2, 2011 10:02:40 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
Feb 2, 2011 10:02:40 AM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Feb 2, 2011 10:02:40 AM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Feb 2, 2011 10:02:41 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0001
Feb 2, 2011 10:02:41 AM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of
commiting
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
Feb 2, 2011 10:02:42 AM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0001_m_000000_0' to output/data
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0001_m_000000_0' done.
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO:  map 100% reduce 0%
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0001
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Counters log
INFO:   FileSystemCounters
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_READ=10550598
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_WRITTEN=10694246
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Counters log
INFO:   Map-Reduce Framework
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Counters log
INFO:     Map input records=600
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Counters log
INFO:     Spilled Records=0
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.Counters log
INFO:     Map output records=600
Feb 2, 2011 10:02:42 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Build Clusters Input: output/data Out: output Measure:
org.apache.mahout.common.distance.EuclideanDistanceMeasure@5773ec72 t1: 80.0
t2: 55.0
Feb 2, 2011 10:02:42 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Feb 2, 2011 10:02:42 AM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Feb 2, 2011 10:02:43 AM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Feb 2, 2011 10:02:43 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0002
Feb 2, 2011 10:02:43 AM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Feb 2, 2011 10:02:43 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Feb 2, 2011 10:02:43 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Feb 2, 2011 10:02:43 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO:  map 0% reduce 0%
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0002_m_000000_0 is done. And is in the process of
commiting
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0002_m_000000_0' done.
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 13902
bytes
Feb 2, 2011 10:02:44 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0002_r_000000_0 is done. And is in the process of
commiting
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0002_r_000000_0 is allowed to commit now
Feb 2, 2011 10:02:45 AM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0002_r_000000_0' to
output/clusters-0
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0002_r_000000_0' done.
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO:  map 100% reduce 100%
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0002
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:   FileSystemCounters
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_READ=39326144
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_WRITTEN=39124429
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:   Map-Reduce Framework
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Reduce input groups=1
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Combine output records=0
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Map input records=600
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Reduce shuffle bytes=0
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Reduce output records=6
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Spilled Records=50
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Map output bytes=13800
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Combine input records=0
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Map output records=25
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.Counters log
INFO:     Reduce input records=25
Feb 2, 2011 10:02:45 AM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Feb 2, 2011 10:02:45 AM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Feb 2, 2011 10:02:45 AM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Feb 2, 2011 10:02:46 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0003
Feb 2, 2011 10:02:46 AM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Feb 2, 2011 10:02:46 AM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0003_m_000000_0 is done. And is in the process of
commiting
Feb 2, 2011 10:02:46 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Feb 2, 2011 10:02:46 AM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0003_m_000000_0 is allowed to commit now
Feb 2, 2011 10:02:46 AM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0003_m_000000_0' to
output/clusteredPoints
Feb 2, 2011 10:02:46 AM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: Emit Closest Canopy ID:C-5
Feb 2, 2011 10:02:46 AM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0003_m_000000_0' done.
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO:  map 100% reduce 0%
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0003
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.Counters log
INFO:   FileSystemCounters
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_READ=28782455
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.Counters log
INFO:     FILE_BYTES_WRITTEN=28759236
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.Counters log
INFO:   Map-Reduce Framework
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.Counters log
INFO:     Map input records=600
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.Counters log
INFO:     Spilled Records=0
Feb 2, 2011 10:02:47 AM org.apache.hadoop.mapred.Counters log
INFO:     Map output records=600
C-0{n=21 c=[29.552, 33.073, 35.876, 36.375, 35.118,



On Wed, Feb 2, 2011 at 9:55 AM, sharath jagannath <
sharathjagannath@gmail.com> wrote:

> [quoted original message and stack trace snipped]



-- 
Thanks,
Sharath Jagannath