Posted to common-user@hadoop.apache.org by mallik arjun <ma...@gmail.com> on 2013/03/10 03:05:16 UTC

hadoop cluster not working

Hi guys, I am using Hadoop version 1.0.3, and it ran well before. Even now,
commands like >hadoop fs -ls work fine, but when I use a command like
>hadoop jar /home/mallik/definite/MaxTemperature.jar  input  outputmap

the cluster does not process the job. What might be the problem? When I
look at the logs, there is nothing in them. Please help; it is very
urgent.

Thanks in advance.

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
Finally, my cluster is running well. Here is what I did:

Warning: $HADOOP_HOME is deprecated.

mallik@ubuntu:~$ sudo nano /etc/sysctl.conf
mallik@ubuntu:~$ hadoop jar /home/mallik/definite/MaxTemperature.jar
input  outputmap
Warning: $HADOOP_HOME is deprecated.

mallik@ubuntu:~$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
mallik@ubuntu:~$ hadoop dfsadmin -safemode get
Warning: $HADOOP_HOME is deprecated.

Safe mode is ON
mallik@ubuntu:~$ hadoop dfsadmin -safemode leave
Warning: $HADOOP_HOME is deprecated.

Safe mode is OFF
mallik@ubuntu:~$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
mallik@ubuntu:~$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
mallik@ubuntu:~$
mallik@ubuntu:~$
mallik@ubuntu:~$ hadoop jar /home/mallik/definite/MaxTemperature.jar
input  outputmap
Warning: $HADOOP_HOME is deprecated.

13/03/12 23:44:38 WARN mapred.JobClient: Use GenericOptionsParser for
parsing the arguments. Applications should implement Tool for the same.
13/03/12 23:44:38 INFO input.FileInputFormat: Total input paths to process
: 1
13/03/12 23:44:38 INFO util.NativeCodeLoader: Loaded the native-hadoop
library
13/03/12 23:44:38 WARN snappy.LoadSnappy: Snappy native library not loaded
13/03/12 23:44:39 INFO mapred.JobClient: Running job: job_201303122326_0001
13/03/12 23:44:40 INFO mapred.JobClient:  map 0% reduce 0%
13/03/12 23:44:55 INFO mapred.JobClient:  map 100% reduce 0%
13/03/12 23:45:10 INFO mapred.JobClient:  map 100% reduce 100%
13/03/12 23:45:15 INFO mapred.JobClient: Job complete: job_201303122326_0001
13/03/12 23:45:15 INFO mapred.JobClient: Counters: 29
13/03/12 23:45:15 INFO mapred.JobClient:   Job Counters
13/03/12 23:45:15 INFO mapred.JobClient:     Launched reduce tasks=1
13/03/12 23:45:15 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=15198
13/03/12 23:45:15 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
13/03/12 23:45:15 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0
13/03/12 23:45:15 INFO mapred.JobClient:     Launched map tasks=1
13/03/12 23:45:15 INFO mapred.JobClient:     Data-local map tasks=1
13/03/12 23:45:15 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=14400
13/03/12 23:45:15 INFO mapred.JobClient:   File Output Format Counters
13/03/12 23:45:15 INFO mapred.JobClient:     Bytes Written=17
13/03/12 23:45:15 INFO mapred.JobClient:   FileSystemCounters
13/03/12 23:45:15 INFO mapred.JobClient:     FILE_BYTES_READ=61
13/03/12 23:45:15 INFO mapred.JobClient:     HDFS_BYTES_READ=634
13/03/12 23:45:15 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=42869
13/03/12 23:45:15 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=17
13/03/12 23:45:15 INFO mapred.JobClient:   File Input Format Counters
13/03/12 23:45:15 INFO mapred.JobClient:     Bytes Read=529
13/03/12 23:45:15 INFO mapred.JobClient:   Map-Reduce Framework
13/03/12 23:45:15 INFO mapred.JobClient:     Map output materialized
bytes=61
13/03/12 23:45:15 INFO mapred.JobClient:     Map input records=5
13/03/12 23:45:15 INFO mapred.JobClient:     Reduce shuffle bytes=61
13/03/12 23:45:15 INFO mapred.JobClient:     Spilled Records=10
13/03/12 23:45:15 INFO mapred.JobClient:     Map output bytes=45
13/03/12 23:45:15 INFO mapred.JobClient:     Total committed heap usage
(bytes)=204341248
13/03/12 23:45:15 INFO mapred.JobClient:     CPU time spent (ms)=4210
13/03/12 23:45:15 INFO mapred.JobClient:     Combine input records=0
13/03/12 23:45:15 INFO mapred.JobClient:     SPLIT_RAW_BYTES=105
13/03/12 23:45:15 INFO mapred.JobClient:     Reduce input records=5
13/03/12 23:45:15 INFO mapred.JobClient:     Reduce input groups=2
13/03/12 23:45:15 INFO mapred.JobClient:     Combine output records=0
13/03/12 23:45:15 INFO mapred.JobClient:     Physical memory (bytes)
snapshot=275357696
13/03/12 23:45:15 INFO mapred.JobClient:     Reduce output records=2
13/03/12 23:45:15 INFO mapred.JobClient:     Virtual memory (bytes)
snapshot=2928607232
13/03/12 23:45:15 INFO mapred.JobClient:     Map output records=5
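The safe-mode toggle above was the key step. Checking safe mode can also be scripted; the sketch below is illustrative only, and assumes the `hadoop dfsadmin -safemode get` output format shown in this transcript (including the stray "$HADOOP_HOME is deprecated" warning line):

```python
import subprocess

def safemode_is_on(output: str) -> bool:
    """Parse the output of `hadoop dfsadmin -safemode get`.

    The command may also print warnings (e.g. the "$HADOOP_HOME is
    deprecated" line above), so scan every line for the
    "Safe mode is ..." status instead of reading only the first line.
    """
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Safe mode is"):
            return line.endswith("ON")
    raise ValueError("no safe-mode status found in output")

def check_safemode() -> bool:
    # Assumes the `hadoop` launcher is on PATH; this call is a sketch,
    # not something run as part of this thread.
    out = subprocess.run(
        ["hadoop", "dfsadmin", "-safemode", "get"],
        capture_output=True, text=True,
    ).stdout
    return safemode_is_on(out)
```

Note that jobs cannot be scheduled while HDFS is in safe mode, which is why `-safemode leave` unblocked the run above.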



On Tue, Mar 12, 2013 at 10:57 PM, mallik arjun <ma...@gmail.com>wrote:

> mapred-site.xml
>
> <configuration>
> <property>
>
>     <name>mapred.job.tracker</name>
>
>     <value>localhost:54311</value>
>
>   </property>
> </configuration>
>
> core-site.xml
> <configuration>
>
> <property>
>
>     <name>hadoop.tmp.dir</name>
>
>     <value>/home/mallik/hadoop-${user.name}</value>
>
>     <description>A base for other temporary directories.</description>
>
>   </property>
>
> <property>
>
>  <name>fs.default.name</name>
>
> <value>hdfs://localhost:54310</value>
>
> </property>
> </configuration>
>
>
> hdfs-site.xml
> <configuration>
> <property>
>
>   <name>dfs.replication</name>
>
>   <value>1</value>
>
>   </property>
>
>
>
> </configuration>
>
>
> On Tue, Mar 12, 2013 at 4:53 PM, Vikas Jadhav <vi...@gmail.com>wrote:
>
>> Share the config files from your Hadoop home folder:
>>
>> hadoop-1.0.3/conf/mapred-site.xml
>> hadoop-1.0.3/conf/core-site.xml
>> hadoop-1.0.3/conf/hdfs-site.xml
>>
>>
>> and also run the "jps" command to see which processes are running.
>>
>>
>>
>> On Tue, Mar 12, 2013 at 4:44 PM, Hemanth Yamijala <
>> yhemanth@thoughtworks.com> wrote:
>>
>>> Hi,
>>>
>>> This line in your exception message:
>>> "Exception in thread "main" java.io.IOException: Call to localhost/
>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>> Connection reset by peer"
>>>
>>> indicates that the client is trying to submit a job on the IPC port of
>>> the jobtracker at 127.0.0.1:54311. Can you tell what is configured for
>>> mapred.job.tracker (most likely in your mapred-site.xml)?
>>>
>>>
>>> On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> I have not configured it. Can you tell me how to configure it?
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <
>>>> yhemanth@thoughtworks.com> wrote:
>>>>
>>>>> Have you configured your JobTracker's IPC port as 54311? Sharing your
>>>>> configuration may be helpful.
>>>>>
>>>>> Thanks
>>>>> Hemanth
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <mallik.cloud@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> I have seen the logs, and the reason for the error is:
>>>>>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>>>>>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>>>>>> localhost/127.0.0.1:54311 failed on local exception:
>>>>>> java.io.IOException: Connection reset by peer
>>>>>> Exception in thread "main" java.io.IOException: Call to localhost/
>>>>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>>>>> Connection reset by peer
>>>>>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>>>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>>>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown
>>>>>> Source)
>>>>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>>>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>>>> at
>>>>>> org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>>>>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>>>>>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>>>>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>>>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>>>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>>> at
>>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>>>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>>>>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>>>>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>>>>> at MaxTemperature.main(MaxTemperature.java:31)
>>>>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>> at
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>>  at
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>> at java.lang.reflect.Method.invoke(Method.java:601)
>>>>>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>>>> Caused by: java.io.IOException: Connection reset by peer
>>>>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>>>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>>>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>>>>  at
>>>>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>>> at
>>>>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>>>  at
>>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>>> at
>>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>>>> at
>>>>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>>>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>>>>> at
>>>>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <
>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>
>>>>>>> Both the NameNode and the JobTracker are working well.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>>>>
>>>>>>>> What do you see at
>>>>>>>>
>>>>>>>> localhost:50070
>>>>>>>> localhost:50030
>>>>>>>>
>>>>>>>> Are you able to see the console pages?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <
>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I am not able to run that command, and the logs are empty.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>
>>>>>>>>>> Hi
>>>>>>>>>>
>>>>>>>>>> Are you able to run the wordcount example in
>>>>>>>>>> hadoop-*-examples.jar using this command?
>>>>>>>>>>
>>>>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>>>>
>>>>>>>>>> Check that your JobTracker and TaskTracker started correctly; see the
>>>>>>>>>> logs.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> It is not a problem with MaxTemperature.jar; any command of the
>>>>>>>>>>> form >hadoop jar xxx.jar input output fails the same way.
>>>>>>>>>>>
>>>>>>>>>>> When I run the command, it looks like this: [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi mallik
>>>>>>>>>>>>
>>>>>>>>>>>> Do you submit the job to the JobTracker, e.g. with
>>>>>>>>>>>> JobClient.runJob(conf), in your MaxTemperature.jar package?
>>>>>>>>>>>>
>>>>>>>>>>>> Maybe you can refer to this tutorial [0].
>>>>>>>>>>>>
>>>>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi guys, I am using Hadoop version 1.0.3, and it ran well
>>>>>>>>>>>>> before. Even now, commands like >hadoop fs -ls work fine, but when I use a
>>>>>>>>>>>>> command like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>>>>  input  outputmap
>>>>>>>>>>>>>
>>>>>>>>>>>>> the cluster does not process the job. What might be the
>>>>>>>>>>>>> problem? When I look at the logs, there is nothing in them. Please
>>>>>>>>>>>>> help; it is very urgent.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks in advance.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> *
>> *
>> *
>>
>> Thanx and Regards*
>> * Vikas Jadhav*
>>
>
>
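The "Connection reset by peer" failure discussed above means the client reached 127.0.0.1:54311 but the service there dropped the connection. A quick first check when debugging this class of error is a plain TCP probe of the JobTracker IPC port (the port from mapred.job.tracker). This sketch is illustrative only; the host and port arguments are whatever your configuration uses:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A False result means nothing is listening (or the connection was
    refused); a True result only proves something accepted the TCP
    handshake, not that it speaks the Hadoop IPC protocol.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the JobTracker IPC port from mapred.job.tracker.
# port_open("localhost", 54311)
```

If the probe fails, check that the JobTracker process appears in `jps` output and that the port in mapred-site.xml matches what the client uses.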

>>>>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>>>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>>>>  at
>>>>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>>> at
>>>>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>>>  at
>>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>>> at
>>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>>>> at
>>>>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>>>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>>>>> at
>>>>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <
>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>
>>>>>>> both name node and job tracker are working well
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>>>>
>>>>>>>> What is coming on
>>>>>>>>
>>>>>>>> localhost:50070
>>>>>>>> localhost:50030
>>>>>>>>
>>>>>>>> Are you able to see console pages?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <
>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> i am not able to run that command and logs are empty
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>
>>>>>>>>>> Hi
>>>>>>>>>>
>>>>>>>>>> Are you able to run the wordcount example in
>>>>>>>>>> hadoop-*-examples.jar using this command.
>>>>>>>>>>
>>>>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>>>>
>>>>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>>>>> logs.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command
>>>>>>>>>>> of any >hadoop jar  xxx.jar  input output
>>>>>>>>>>>
>>>>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi mallik
>>>>>>>>>>>>
>>>>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>>>>
>>>>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>>>>
>>>>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well
>>>>>>>>>>>>> before. even now  if use >hadoop fs -ls these commands well but when i use
>>>>>>>>>>>>> the commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>>>>  input  outputmap
>>>>>>>>>>>>>
>>>>>>>>>>>>> the cluster is not processing the job, what might be the
>>>>>>>>>>>>> problem, please help me, when i see logs,nothing in the logs. please help
>>>>>>>>>>>>> me it is very urget.
>>>>>>>>>>>>>
>>>>>>>>>>>>> thanks in advance.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> Thanx and Regards
>> Vikas Jadhav
>>
>
>

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/mallik/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
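
[Editor's note: Vikas's reply below also asks for the output of `jps`. As a quick sanity check, the expected daemon list can be compared against `jps` output with a small script like the following. This is a sketch, not part of the thread; the daemon names assume a default Hadoop 1.0.3 pseudo-distributed setup.]

```shell
# check_daemons: report which core Hadoop 1.x daemons are absent from
# the supplied `jps` output. The daemon names below are an assumption
# based on a default pseudo-distributed Hadoop 1.0.3 setup.
check_daemons() {
    jps_output=$1
    status=0
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        # grep -w prevents "NameNode" from matching "SecondaryNameNode"
        if ! printf '%s\n' "$jps_output" | grep -qw "$d"; then
            echo "missing: $d"
            status=1
        fi
    done
    return $status
}
```

On the cluster node this would be invoked as `check_daemons "$(jps)"`; a non-zero exit status means at least one daemon is down. A missing JobTracker in particular would match the "Connection reset by peer" error on port 54311 seen earlier in the thread.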


On Tue, Mar 12, 2013 at 4:53 PM, Vikas Jadhav <vi...@gmail.com>wrote:

> share your files in hadoop home folder
>
> hadoop-1.0.3/conf/mapred-site.xml
> hadoop-1.0.3/conf/core-site.xml
> hadoop-1.0.3/conf/hdfs-site.xml
>
>
> and also run "jps" command to which processes are running
>
>
>
> On Tue, Mar 12, 2013 at 4:44 PM, Hemanth Yamijala <
> yhemanth@thoughtworks.com> wrote:
>
>> Hi,
>>
>> This line in your exception message:
>> "Exception in thread "main" java.io.IOException: Call to localhost/
>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>> Connection reset by peer"
>>
>> indicates that the client is trying to submit a job on the IPC port of
>> the jobtracker at 127.0.0.1:54311. Can you tell what is configured for
>> mapred.job.tracker (most likely in your mapred-site.xml)
>>
>>
>> On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> i have not configured, can u tell me how to configure
>>>
>>>
>>> On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <
>>> yhemanth@thoughtworks.com> wrote:
>>>
>>>> Have you configured your JobTracker's IPC port as 54311. Sharing your
>>>> configuration may be helpful.
>>>>
>>>> Thanks
>>>> Hemanth
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> i have seen the logs and the reason for the error is
>>>>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>>>>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>>>>> localhost/127.0.0.1:54311 failed on local exception:
>>>>> java.io.IOException: Connection reset by peer
>>>>> Exception in thread "main" java.io.IOException: Call to localhost/
>>>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>>>> Connection reset by peer
>>>>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown
>>>>> Source)
>>>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>>> at
>>>>> org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>>>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>>>>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>>>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>> at
>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>>>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>>>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>>>> at MaxTemperature.main(MaxTemperature.java:31)
>>>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> at
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>  at
>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> at java.lang.reflect.Method.invoke(Method.java:601)
>>>>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>>> Caused by: java.io.IOException: Connection reset by peer
>>>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>>>  at
>>>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>> at
>>>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>>  at
>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>> at
>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>>> at
>>>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>>>> at
>>>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <mallik.cloud@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> both name node and job tracker are working well
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>>>
>>>>>>> What is coming on
>>>>>>>
>>>>>>> localhost:50070
>>>>>>> localhost:50030
>>>>>>>
>>>>>>> Are you able to see console pages?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <
>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>
>>>>>>>> i am not able to run that command and logs are empty
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>>>>>> using this command.
>>>>>>>>>
>>>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>>>
>>>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>>>> logs.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command
>>>>>>>>>> of any >hadoop jar  xxx.jar  input output
>>>>>>>>>>
>>>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi mallik
>>>>>>>>>>>
>>>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>>>
>>>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>>>
>>>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well
>>>>>>>>>>>> before. even now  if use >hadoop fs -ls these commands well but when i use
>>>>>>>>>>>> the commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>>>  input  outputmap
>>>>>>>>>>>>
>>>>>>>>>>>> the cluster is not processing the job, what might be the
>>>>>>>>>>>> problem, please help me, when i see logs,nothing in the logs. please help
>>>>>>>>>>>> me it is very urget.
>>>>>>>>>>>>
>>>>>>>>>>>> thanks in advance.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
> Thanx and Regards
> Vikas Jadhav
>
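
[Editor's note: the fix mallik eventually posted (the "finally my cluster is running well" message at the top of this page) was editing /etc/sysctl.conf to disable IPv6, reloading the settings, and then taking HDFS out of safe mode with `hadoop dfsadmin -safemode leave`. The exact sysctl.conf lines are not shown in the thread; on Ubuntu the usual entries are the following (an assumption, not quoted from the thread):]

```
# /etc/sysctl.conf: disable IPv6 so the Hadoop 1.x daemons bind plain
# IPv4 addresses; apply with `sudo sysctl -p` (or reboot).
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
```

After this, `cat /proc/sys/net/ipv6/conf/all/disable_ipv6` prints 1, as seen in mallik's final output.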

>>>>>>>>>>>> before. even now  if use >hadoop fs -ls these commands well but when i use
>>>>>>>>>>>> the commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>>>  input  outputmap
>>>>>>>>>>>>
>>>>>>>>>>>> the cluster is not processing the job, what might be the
>>>>>>>>>>>> problem, please help me, when i see logs,nothing in the logs. please help
>>>>>>>>>>>> me it is very urget.
>>>>>>>>>>>>
>>>>>>>>>>>> thanks in advance.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
> *
> *
> *
>
> Thanx and Regards*
> * Vikas Jadhav*
>

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/mallik/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
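As a quick sanity check, the configured JobTracker address can be read back out of mapred-site.xml programmatically and compared with the address in the "Call to localhost/127.0.0.1:54311" exception. A minimal sketch (the XML is inlined here for illustration; in practice you would read hadoop-1.0.3/conf/mapred-site.xml from disk):

```python
import xml.etree.ElementTree as ET

def read_property(xml_text, wanted):
    """Return the <value> of the Hadoop property named `wanted`, or None."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == wanted:
            return prop.findtext("value")
    return None

# The mapred-site.xml contents shown above, inlined for illustration.
MAPRED_SITE = """<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>"""

host, port = read_property(MAPRED_SITE, "mapred.job.tracker").split(":")
print(host, int(port))  # localhost 54311
```

The host:port printed here is where JobClient will try to submit jobs, so it should match the JobTracker's actual IPC endpoint.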


On Tue, Mar 12, 2013 at 4:53 PM, Vikas Jadhav <vi...@gmail.com>wrote:

> share your files in hadoop home folder
>
> hadoop-1.0.3/conf/mapred-site.xml
> hadoop-1.0.3/conf/core-site.xml
> hadoop-1.0.3/conf/hdfs-site.xml
>
>
> and also run "jps" command to which processes are running
>
>
>
> On Tue, Mar 12, 2013 at 4:44 PM, Hemanth Yamijala <
> yhemanth@thoughtworks.com> wrote:
>
>> Hi,
>>
>> This line in your exception message:
>> "Exception in thread "main" java.io.IOException: Call to localhost/
>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>> Connection reset by peer"
>>
>> indicates that the client is trying to submit a job on the IPC port of
>> the jobtracker at 127.0.0.1:54311. Can you tell what is configured for
>> mapred.job.tracker (most likely in your mapred-site.xml)
>>
>>
>> On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> i have not configured, can u tell me how to configure
>>>
>>>
>>> On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <
>>> yhemanth@thoughtworks.com> wrote:
>>>
>>>> Have you configured your JobTracker's IPC port as 54311. Sharing your
>>>> configuration may be helpful.
>>>>
>>>> Thanks
>>>> Hemanth
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> i have seen the logs and the reason for the error is
>>>>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>>>>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>>>>> localhost/127.0.0.1:54311 failed on local exception:
>>>>> java.io.IOException: Connection reset by peer
>>>>> Exception in thread "main" java.io.IOException: Call to localhost/
>>>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>>>> Connection reset by peer
>>>>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown
>>>>> Source)
>>>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>>> at
>>>>> org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>>>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>>>>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>>>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>> at
>>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>>>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>>>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>>>> at MaxTemperature.main(MaxTemperature.java:31)
>>>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>> at
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>  at
>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>> at java.lang.reflect.Method.invoke(Method.java:601)
>>>>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>>> Caused by: java.io.IOException: Connection reset by peer
>>>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>>>  at
>>>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>> at
>>>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>>  at
>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>> at
>>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>>> at
>>>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>>>> at
>>>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <mallik.cloud@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> both name node and job tracker are working well
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>>>
>>>>>>> What is coming on
>>>>>>>
>>>>>>> localhost:50070
>>>>>>> localhost:50030
>>>>>>>
>>>>>>> Are you able to see console pages?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <
>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>
>>>>>>>> i am not able to run that command and logs are empty
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>>>>>> using this command.
>>>>>>>>>
>>>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>>>
>>>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>>>> logs.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command
>>>>>>>>>> of any >hadoop jar  xxx.jar  input output
>>>>>>>>>>
>>>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi mallik
>>>>>>>>>>>
>>>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>>>
>>>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>>>
>>>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well
>>>>>>>>>>>> before. even now  if use >hadoop fs -ls these commands well but when i use
>>>>>>>>>>>> the commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>>>  input  outputmap
>>>>>>>>>>>>
>>>>>>>>>>>> the cluster is not processing the job, what might be the
>>>>>>>>>>>> problem, please help me, when i see logs,nothing in the logs. please help
>>>>>>>>>>>> me it is very urget.
>>>>>>>>>>>>
>>>>>>>>>>>> thanks in advance.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>
> --
> *
> *
> *
>
> Thanx and Regards*
> * Vikas Jadhav*
>


Re: hadoop cluster not working

Posted by Vikas Jadhav <vi...@gmail.com>.
Share the following files from your Hadoop home folder:

hadoop-1.0.3/conf/mapred-site.xml
hadoop-1.0.3/conf/core-site.xml
hadoop-1.0.3/conf/hdfs-site.xml

Also run the "jps" command to see which processes are running.
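Besides `jps`, it can help to check whether anything is actually accepting TCP connections on the JobTracker port (54311 in this thread). A small sketch, assuming the Python standard library is available on the node:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Attempt a TCP connect; True means something is listening on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("localhost", 54311) should be True while the JobTracker is up;
# False (or a reset on connect) points at a missing or misconfigured daemon.
```

If `jps` shows a JobTracker process but this probe fails, the daemon is likely bound to a different port or interface than the one in mapred.job.tracker.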



On Tue, Mar 12, 2013 at 4:44 PM, Hemanth Yamijala <yhemanth@thoughtworks.com
> wrote:

> Hi,
>
> This line in your exception message:
> "Exception in thread "main" java.io.IOException: Call to localhost/
> 127.0.0.1:54311 failed on local exception: java.io.IOException:
> Connection reset by peer"
>
> indicates that the client is trying to submit a job on the IPC port of the
> jobtracker at 127.0.0.1:54311. Can you tell what is configured for
> mapred.job.tracker (most likely in your mapred-site.xml)
>
>
> On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <ma...@gmail.com>wrote:
>
>> i have not configured, can u tell me how to configure
>>
>>
>> On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <
>> yhemanth@thoughtworks.com> wrote:
>>
>>> Have you configured your JobTracker's IPC port as 54311. Sharing your
>>> configuration may be helpful.
>>>
>>> Thanks
>>> Hemanth
>>>
>>>
>>> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> i have seen the logs and the reason for the error is
>>>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>>>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>>>> localhost/127.0.0.1:54311 failed on local exception:
>>>> java.io.IOException: Connection reset by peer
>>>> Exception in thread "main" java.io.IOException: Call to localhost/
>>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>>> Connection reset by peer
>>>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
>>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>> at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>>>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>>> at MaxTemperature.main(MaxTemperature.java:31)
>>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> at
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>  at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> at java.lang.reflect.Method.invoke(Method.java:601)
>>>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>> Caused by: java.io.IOException: Connection reset by peer
>>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>>  at
>>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>> at
>>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>  at
>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>> at
>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>> at
>>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>>> at
>>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> both name node and job tracker are working well
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>>
>>>>>> What is coming on
>>>>>>
>>>>>> localhost:50070
>>>>>> localhost:50030
>>>>>>
>>>>>> Are you able to see console pages?
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <mallik.cloud@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> i am not able to run that command and logs are empty
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>
>>>>>>>> Hi
>>>>>>>>
>>>>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>>>>> using this command.
>>>>>>>>
>>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>>
>>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>>> logs.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command
>>>>>>>>> of any >hadoop jar  xxx.jar  input output
>>>>>>>>>
>>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>
>>>>>>>>>> Hi mallik
>>>>>>>>>>
>>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>>
>>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>>
>>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well
>>>>>>>>>>> before. even now  if use >hadoop fs -ls these commands well but when i use
>>>>>>>>>>> the commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>>  input  outputmap
>>>>>>>>>>>
>>>>>>>>>>> the cluster is not processing the job, what might be the
>>>>>>>>>>> problem, please help me, when i see logs,nothing in the logs. please help
>>>>>>>>>>> me it is very urget.
>>>>>>>>>>>
>>>>>>>>>>> thanks in advance.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


-- 
Thanx and Regards
Vikas Jadhav

Re: hadoop cluster not working

Posted by Vikas Jadhav <vi...@gmail.com>.
share your files in hadoop home folder

hadoop-1.0.3/conf/mapred-site.xml
hadoop-1.0.3/conf/core-site.xml
hadoop-1.0.3/conf/hdfs-site.xml


and also run "jps" command to which processes are running



On Tue, Mar 12, 2013 at 4:44 PM, Hemanth Yamijala <yhemanth@thoughtworks.com
> wrote:

> Hi,
>
> This line in your exception message:
> "Exception in thread "main" java.io.IOException: Call to localhost/
> 127.0.0.1:54311 failed on local exception: java.io.IOException:
> Connection reset by peer"
>
> indicates that the client is trying to submit a job on the IPC port of the
> jobtracker at 127.0.0.1:54311. Can you tell what is configured for
> mapred.job.tracker (most likely in your mapred-site.xml)
>
>
> On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <ma...@gmail.com>wrote:
>
>> i have not configured, can u tell me how to configure
>>
>>
>> On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <
>> yhemanth@thoughtworks.com> wrote:
>>
>>> Have you configured your JobTracker's IPC port as 54311. Sharing your
>>> configuration may be helpful.
>>>
>>> Thanks
>>> Hemanth
>>>
>>>
>>> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> i have seen the logs and the reason for the error is
>>>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>>>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>>>> localhost/127.0.0.1:54311 failed on local exception:
>>>> java.io.IOException: Connection reset by peer
>>>> Exception in thread "main" java.io.IOException: Call to localhost/
>>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>>> Connection reset by peer
>>>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
>>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>> at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>>>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>>> at MaxTemperature.main(MaxTemperature.java:31)
>>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> at
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>  at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> at java.lang.reflect.Method.invoke(Method.java:601)
>>>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>> Caused by: java.io.IOException: Connection reset by peer
>>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>>  at
>>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>> at
>>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>  at
>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>> at
>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>> at
>>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>>> at
>>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> both name node and job tracker are working well
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>>
>>>>>> What is coming on
>>>>>>
>>>>>> localhost:50070
>>>>>> localhost:50030
>>>>>>
>>>>>> Are you able to see console pages?
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <mallik.cloud@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> i am not able to run that command and logs are empty
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>
>>>>>>>> Hi
>>>>>>>>
>>>>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>>>>> using this command.
>>>>>>>>
>>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>>
>>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>>> logs.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command
>>>>>>>>> of any >hadoop jar  xxx.jar  input output
>>>>>>>>>
>>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>
>>>>>>>>>> Hi mallik
>>>>>>>>>>
>>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>>
>>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>>
>>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi guys, I am using Hadoop version 1.0.3 and it ran well
>>>>>>>>>>> before. Even now commands like >hadoop fs -ls work fine, but when I use
>>>>>>>>>>> a command like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>>  input  outputmap
>>>>>>>>>>>
>>>>>>>>>>> the cluster does not process the job. What might be the
>>>>>>>>>>> problem? When I check the logs, there is nothing in them. Please help
>>>>>>>>>>> me, it is very urgent.
>>>>>>>>>>>
>>>>>>>>>>> thanks in advance.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


-- 
Thanx and Regards
Vikas Jadhav

Re: hadoop cluster not working

Posted by Vikas Jadhav <vi...@gmail.com>.
Please share the following files from your Hadoop home folder:

hadoop-1.0.3/conf/mapred-site.xml
hadoop-1.0.3/conf/core-site.xml
hadoop-1.0.3/conf/hdfs-site.xml


and also run the "jps" command to see which processes are running
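
(For reference, a minimal pseudo-distributed Hadoop 1.x configuration for these three files looks roughly like the sketch below. The hostnames and ports here are assumptions for illustration — adjust them to match your cluster; only 54311 is known from the error message in this thread.)

```xml
<!-- conf/core-site.xml : default filesystem (assumed pseudo-distributed setup) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml : JobTracker IPC address the client submits jobs to -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml : replication factor of 1 for a single-node cluster -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```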



On Tue, Mar 12, 2013 at 4:44 PM, Hemanth Yamijala <yhemanth@thoughtworks.com
> wrote:

> Hi,
>
> This line in your exception message:
> "Exception in thread "main" java.io.IOException: Call to localhost/
> 127.0.0.1:54311 failed on local exception: java.io.IOException:
> Connection reset by peer"
>
> indicates that the client is trying to submit a job on the IPC port of the
> jobtracker at 127.0.0.1:54311. Can you tell what is configured for
> mapred.job.tracker (most likely in your mapred-site.xml)
>
>
> On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <ma...@gmail.com>wrote:
>
>> I have not configured it. Can you tell me how to configure it?
>>
>>
>> On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <
>> yhemanth@thoughtworks.com> wrote:
>>
>>> Have you configured your JobTracker's IPC port as 54311? Sharing your
>>> configuration may be helpful.
>>>
>>> Thanks
>>> Hemanth
>>>
>>>
>>> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> i have seen the logs and the reason for the error is
>>>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>>>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>>>> localhost/127.0.0.1:54311 failed on local exception:
>>>> java.io.IOException: Connection reset by peer
>>>> Exception in thread "main" java.io.IOException: Call to localhost/
>>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>>> Connection reset by peer
>>>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
>>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>>> at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>>>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>>>> at java.security.AccessController.doPrivileged(Native Method)
>>>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>>> at MaxTemperature.main(MaxTemperature.java:31)
>>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> at
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>  at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> at java.lang.reflect.Method.invoke(Method.java:601)
>>>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>> Caused by: java.io.IOException: Connection reset by peer
>>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>>  at
>>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>> at
>>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>  at
>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>> at
>>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>>> at
>>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>>> at
>>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> both name node and job tracker are working well
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>>
>>>>>> What is coming on
>>>>>>
>>>>>> localhost:50070
>>>>>> localhost:50030
>>>>>>
>>>>>> Are you able to see console pages?
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <mallik.cloud@gmail.com
>>>>>> > wrote:
>>>>>>
>>>>>>> i am not able to run that command and logs are empty
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>
>>>>>>>> Hi
>>>>>>>>
>>>>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>>>>> using this command.
>>>>>>>>
>>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>>
>>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>>> logs.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command
>>>>>>>>> of any >hadoop jar  xxx.jar  input output
>>>>>>>>>
>>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>>
>>>>>>>>>> Hi mallik
>>>>>>>>>>
>>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>>
>>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>>
>>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi guys, I am using Hadoop version 1.0.3 and it ran well
>>>>>>>>>>> before. Even now commands like >hadoop fs -ls work fine, but when I use
>>>>>>>>>>> a command like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>>  input  outputmap
>>>>>>>>>>>
>>>>>>>>>>> the cluster does not process the job. What might be the
>>>>>>>>>>> problem? When I check the logs, there is nothing in them. Please help
>>>>>>>>>>> me, it is very urgent.
>>>>>>>>>>>
>>>>>>>>>>> thanks in advance.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


-- 
Thanx and Regards
Vikas Jadhav


Re: hadoop cluster not working

Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
Hi,

This line in your exception message:
"Exception in thread "main" java.io.IOException: Call to localhost/
127.0.0.1:54311 failed on local exception: java.io.IOException: Connection
reset by peer"

indicates that the client is trying to submit a job on the IPC port of the
jobtracker at 127.0.0.1:54311. Can you tell us what is configured for
mapred.job.tracker (most likely in your mapred-site.xml)?
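
(For a client whose job submission should reach a JobTracker listening on 127.0.0.1:54311, mapred-site.xml would typically contain an entry like the sketch below. The host/port are taken from the error message above and may differ on your cluster.)

```xml
<configuration>
  <property>
    <!-- must match the host:port the JobTracker actually binds to -->
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>
```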


On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <ma...@gmail.com>wrote:

> I have not configured it. Can you tell me how to configure it?
>
>
> On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <
> yhemanth@thoughtworks.com> wrote:
>
>> Have you configured your JobTracker's IPC port as 54311? Sharing your
>> configuration may be helpful.
>>
>> Thanks
>> Hemanth
>>
>>
>> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> i have seen the logs and the reason for the error is
>>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>>> localhost/127.0.0.1:54311 failed on local exception:
>>> java.io.IOException: Connection reset by peer
>>> Exception in thread "main" java.io.IOException: Call to localhost/
>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>> Connection reset by peer
>>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>> at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>> at MaxTemperature.main(MaxTemperature.java:31)
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>  at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:601)
>>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>> Caused by: java.io.IOException: Connection reset by peer
>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>  at
>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>> at
>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>  at
>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>> at
>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>> at
>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>> at
>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>
>>>
>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> both name node and job tracker are working well
>>>>
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>
>>>>> What is coming on
>>>>>
>>>>> localhost:50070
>>>>> localhost:50030
>>>>>
>>>>> Are you able to see console pages?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <ma...@gmail.com>wrote:
>>>>>
>>>>>> i am not able to run that command and logs are empty
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>>>> using this command.
>>>>>>>
>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>
>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>> logs.
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>
>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command of
>>>>>>>> any >hadoop jar  xxx.jar  input output
>>>>>>>>
>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>
>>>>>>>>> Hi mallik
>>>>>>>>>
>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>
>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>
>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi guys, I am using Hadoop version 1.0.3 and it ran well
>>>>>>>>>> before. Even now commands like >hadoop fs -ls work fine, but when I use
>>>>>>>>>> a command like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>  input  outputmap
>>>>>>>>>>
>>>>>>>>>> the cluster does not process the job. What might be the problem?
>>>>>>>>>> When I check the logs, there is nothing in them. Please help me, it is
>>>>>>>>>> very urgent.
>>>>>>>>>>
>>>>>>>>>> thanks in advance.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>
>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>> logs.
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>
>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command of
>>>>>>>> any >hadoop jar  xxx.jar  input output
>>>>>>>>
>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>
>>>>>>>>> Hi mallik
>>>>>>>>>
>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>
>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>
>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well
>>>>>>>>>> before. even now  if use >hadoop fs -ls these commands well but when i use
>>>>>>>>>> the commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>  input  outputmap
>>>>>>>>>>
>>>>>>>>>> the cluster is not processing the job, what might be the problem,
>>>>>>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>>>>>>> very urget.
>>>>>>>>>>
>>>>>>>>>> thanks in advance.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: hadoop cluster not working

Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
Hi,

This line in your exception message:
"Exception in thread "main" java.io.IOException: Call to localhost/
127.0.0.1:54311 failed on local exception: java.io.IOException: Connection
reset by peer"

indicates that the client is trying to submit a job on the IPC port of the
jobtracker at 127.0.0.1:54311. Can you tell what is configured for
mapred.job.tracker (most likely in your mapred-site.xml)?
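For reference, a minimal mapred-site.xml sketch for this case. The host and
port (localhost:54311) are assumptions taken from the exception message above;
adjust them to wherever your JobTracker actually runs:

```xml
<?xml version="1.0"?>
<!-- conf/mapred-site.xml : minimal sketch, assuming a single-node setup
     where the JobTracker listens on localhost:54311 (the address that
     appears in the "Call to localhost/127.0.0.1:54311 failed" error). -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>
```

After editing, restart the MapReduce daemons (bin/stop-mapred.sh, then
bin/start-mapred.sh) and confirm with jps that both the JobTracker and the
TaskTracker are running before resubmitting the job.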


On Tue, Mar 12, 2013 at 7:37 AM, mallik arjun <ma...@gmail.com>wrote:

> i have not configured, can u tell me how to configure
>
>
> On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <
> yhemanth@thoughtworks.com> wrote:
>
>> Have you configured your JobTracker's IPC port as 54311. Sharing your
>> configuration may be helpful.
>>
>> Thanks
>> Hemanth
>>
>>
>> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> i have seen the logs and the reason for the error is
>>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>>> localhost/127.0.0.1:54311 failed on local exception:
>>> java.io.IOException: Connection reset by peer
>>> Exception in thread "main" java.io.IOException: Call to localhost/
>>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>>> Connection reset by peer
>>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
>>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>>> at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>>> at MaxTemperature.main(MaxTemperature.java:31)
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>  at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:601)
>>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>> Caused by: java.io.IOException: Connection reset by peer
>>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>>  at
>>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>> at
>>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>  at
>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>> at
>>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>>> at
>>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>> at
>>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>>
>>>
>>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> both name node and job tracker are working well
>>>>
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>>
>>>>> What is coming on
>>>>>
>>>>> localhost:50070
>>>>> localhost:50030
>>>>>
>>>>> Are you able to see console pages?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <ma...@gmail.com>wrote:
>>>>>
>>>>>> i am not able to run that command and logs are empty
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com>wrote:
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>>>> using this command.
>>>>>>>
>>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>>
>>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>>> logs.
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>
>>>>>>>> it is the not the problem of MaxTemperature.jar,even the command of
>>>>>>>> any >hadoop jar  xxx.jar  input output
>>>>>>>>
>>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>>
>>>>>>>>> Hi mallik
>>>>>>>>>
>>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>>
>>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>>
>>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well
>>>>>>>>>> before. even now  if use >hadoop fs -ls these commands well but when i use
>>>>>>>>>> the commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar
>>>>>>>>>>  input  outputmap
>>>>>>>>>>
>>>>>>>>>> the cluster is not processing the job, what might be the problem,
>>>>>>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>>>>>>> very urget.
>>>>>>>>>>
>>>>>>>>>> thanks in advance.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
I have not configured it. Can you tell me how to configure it?


On Sun, Mar 10, 2013 at 7:31 PM, Hemanth Yamijala <yhemanth@thoughtworks.com
> wrote:

> Have you configured your JobTracker's IPC port as 54311. Sharing your
> configuration may be helpful.
>
> Thanks
> Hemanth
>
>
> On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:
>
>> i have seen the logs and the reason for the error is
>> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
>> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
>> localhost/127.0.0.1:54311 failed on local exception:
>> java.io.IOException: Connection reset by peer
>> Exception in thread "main" java.io.IOException: Call to localhost/
>> 127.0.0.1:54311 failed on local exception: java.io.IOException:
>> Connection reset by peer
>>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
>> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>> at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
>> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
>> at java.security.AccessController.doPrivileged(Native Method)
>>  at javax.security.auth.Subject.doAs(Subject.java:415)
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
>> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
>> at MaxTemperature.main(MaxTemperature.java:31)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>  at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:601)
>>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> Caused by: java.io.IOException: Connection reset by peer
>> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
>> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>>  at
>> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>> at
>> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>  at
>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>> at
>> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>> at
>> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>> at
>> org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>>
>>
>> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> both name node and job tracker are working well
>>>
>>>
>>>
>>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>>
>>>> What is coming on
>>>>
>>>> localhost:50070
>>>> localhost:50030
>>>>
>>>> Are you able to see console pages?
>>>>
>>>>
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> i am not able to run that command and logs are empty
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com> wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>>> using this command.
>>>>>>
>>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>>
>>>>>> check your JobTracker and TaskTracker is start correctly. see the
>>>>>> logs.
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <
>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>
>>>>>>> it is the not the problem of MaxTemperature.jar,even the command of
>>>>>>> any >hadoop jar  xxx.jar  input output
>>>>>>>
>>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>>
>>>>>>>> Hi mallik
>>>>>>>>
>>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>>
>>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>>
>>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well before.
>>>>>>>>> even now  if use >hadoop fs -ls these commands well but when i use the
>>>>>>>>> commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input
>>>>>>>>>  outputmap
>>>>>>>>>
>>>>>>>>> the cluster is not processing the job, what might be the problem,
>>>>>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>>>>>> very urget.
>>>>>>>>>
>>>>>>>>> thanks in advance.
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

>>>>>>>>>
>>>>>>>>> the cluster is not processing the job, what might be the problem,
>>>>>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>>>>>> very urget.
>>>>>>>>>
>>>>>>>>> thanks in advance.
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
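The "Connection reset by peer" trace above fails while the client is opening an RPC connection to the JobTracker. Before digging into Hadoop configs, it can help to confirm whether anything is even accepting TCP connections on the JobTracker port. The sketch below is a generic reachability check, not part of Hadoop; the host `localhost` and port `54311` are assumptions taken from this thread (54311 is the port the error message names).

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts the connect;
        # any failure (refused, timeout, unresolvable) raises OSError.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 54311 is the JobTracker RPC port assumed from the error in this thread.
    print("JobTracker RPC reachable:", port_open("localhost", 54311))
```

If this prints `False`, the JobTracker process is not listening where the client expects it, which points at configuration or a daemon that exited after startup.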

Re: hadoop cluster not working

Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
Have you configured your JobTracker's IPC port as 54311? Sharing your
configuration may be helpful.

Thanks
Hemanth


On Sun, Mar 10, 2013 at 11:56 AM, mallik arjun <ma...@gmail.com>wrote:

> i have seen the logs and the reason for the error is
> 13/03/10 10:26:45 ERROR security.UserGroupInformation:
> PriviledgedActionException as:mallik cause:java.io.IOException: Call to
> localhost/127.0.0.1:54311 failed on local exception: java.io.IOException:
> Connection reset by peer
> Exception in thread "main" java.io.IOException: Call to localhost/
> 127.0.0.1:54311 failed on local exception: java.io.IOException:
> Connection reset by peer
> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
>  at org.apache.hadoop.ipc.Client.call(Client.java:1075)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>  at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
> at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
> at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
>  at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
>  at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
> at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>  at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
>  at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
> at MaxTemperature.main(MaxTemperature.java:31)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>  at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
>  at sun.nio.ch.IOUtil.read(IOUtil.java:191)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
>  at
> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
> at
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>  at
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
>  at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
>
>
> On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:
>
>> both name node and job tracker are working well
>>
>>
>>
>> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>>
>>> What is coming on
>>>
>>> localhost:50070
>>> localhost:50030
>>>
>>> Are you able to see console pages?
>>>
>>>
>>>
>>>
>>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> i am not able to run that command and logs are empty
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>>> using this command.
>>>>>
>>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>>> <#reducers>] <in-dir> <out-dir>
>>>>>
>>>>> check your JobTracker and TaskTracker is start correctly. see the logs.
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <mallik.cloud@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> it is the not the problem of MaxTemperature.jar,even the command of
>>>>>> any >hadoop jar  xxx.jar  input output
>>>>>>
>>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com>wrote:
>>>>>>
>>>>>>> Hi mallik
>>>>>>>
>>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>>
>>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>>
>>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>>
>>>>>>>
>>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>>
>>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well before.
>>>>>>>> even now  if use >hadoop fs -ls these commands well but when i use the
>>>>>>>> commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input
>>>>>>>>  outputmap
>>>>>>>>
>>>>>>>> the cluster is not processing the job, what might be the problem,
>>>>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>>>>> very urget.
>>>>>>>>
>>>>>>>> thanks in advance.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Don't Grow Old, Grow Up... :-)
>>>>>
>>>>
>>>>
>>>
>>
>

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
I have seen the logs, and the reason for the error is:
13/03/10 10:26:45 ERROR security.UserGroupInformation:
PriviledgedActionException as:mallik cause:java.io.IOException: Call to
localhost/127.0.0.1:54311 failed on local exception: java.io.IOException:
Connection reset by peer
Exception in thread "main" java.io.IOException: Call to localhost/
127.0.0.1:54311 failed on local exception: java.io.IOException: Connection
reset by peer
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at MaxTemperature.main(MaxTemperature.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
at sun.nio.ch.IOUtil.read(IOUtil.java:191)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at
org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
at
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)


On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:

> both name node and job tracker are working well
>
>
>
> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>
>> What is coming on
>>
>> localhost:50070
>> localhost:50030
>>
>> Are you able to see console pages?
>>
>>
>>
>>
>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> i am not able to run that command and logs are empty
>>>
>>>
>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com> wrote:
>>>
>>>> Hi
>>>>
>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>> using this command.
>>>>
>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>> <#reducers>] <in-dir> <out-dir>
>>>>
>>>> check your JobTracker and TaskTracker is start correctly. see the logs.
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> it is the not the problem of MaxTemperature.jar,even the command of
>>>>> any >hadoop jar  xxx.jar  input output
>>>>>
>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com> wrote:
>>>>>
>>>>>> Hi mallik
>>>>>>
>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>
>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>
>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>
>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well before.
>>>>>>> even now  if use >hadoop fs -ls these commands well but when i use the
>>>>>>> commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input
>>>>>>>  outputmap
>>>>>>>
>>>>>>> the cluster is not processing the job, what might be the problem,
>>>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>>>> very urget.
>>>>>>>
>>>>>>> thanks in advance.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Don't Grow Old, Grow Up... :-)
>>>>
>>>
>>>
>>
>
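Jagat's question above about the console pages (localhost:50070 for the NameNode, localhost:50030 for the JobTracker in Hadoop 1.x) can also be answered from a script. This is a generic HTTP probe, not a Hadoop API; the ports are the 1.x defaults and the hostname is assumed from this thread.

```python
from urllib.request import urlopen
from urllib.error import URLError

def ui_status(url, timeout=3.0):
    """Fetch a daemon web UI page; return the HTTP status code, or None if unreachable."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status
    except (URLError, OSError):
        # Covers connection refused, DNS failure, and timeouts.
        return None

if __name__ == "__main__":
    # Default Hadoop 1.x web UI ports: NameNode 50070, JobTracker 50030.
    for name, url in [("NameNode  ", "http://localhost:50070/"),
                      ("JobTracker", "http://localhost:50030/")]:
        print(name, ui_status(url))
```

A status of 200 for both pages means the daemons' web servers are up; `None` for the JobTracker while the NameNode answers would match the symptoms described in this thread.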

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
i have seen the logs and the reason for the error is
13/03/10 10:26:45 ERROR security.UserGroupInformation:
PriviledgedActionException as:mallik cause:java.io.IOException: Call to
localhost/127.0.0.1:54311 failed on local exception: java.io.IOException:
Connection reset by peer
Exception in thread "main" java.io.IOException: Call to localhost/
127.0.0.1:54311 failed on local exception: java.io.IOException: Connection
reset by peer
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1107)
at org.apache.hadoop.ipc.Client.call(Client.java:1075)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:480)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:474)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:457)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at MaxTemperature.main(MaxTemperature.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
at sun.nio.ch.IOUtil.read(IOUtil.java:191)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at
org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
at
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:342)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:804)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:749)
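
"Connection reset by peer" in the trace above means the client reached localhost:54311 but nothing healthy answered the JobTracker RPC call. A minimal sketch of the same first-step reachability check, in Python rather than Hadoop's IPC client (the host and port are taken from the trace; this is a diagnostic aid, not part of Hadoop):

```python
import socket

def rpc_port_open(host, port, timeout=2.0):
    """Attempt a plain TCP connect, as the Hadoop IPC client does first.

    Returns True if the connection is accepted, False if it is
    refused, reset, or times out.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, reset-by-peer, and timeouts
        return False

if __name__ == "__main__":
    # Port 54311 comes from mapred.job.tracker in this setup.
    print("JobTracker reachable:", rpc_port_open("localhost", 54311))
```

If this prints False, the JobTracker process is likely down or bound to a different address than the client expects; if it prints True but jobs still hang, the daemon is up but refusing work (for example, HDFS still in safe mode).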


On Sun, Mar 10, 2013 at 10:33 AM, mallik arjun <ma...@gmail.com>wrote:

> both name node and job tracker are working well
>
>
>
> On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com>wrote:
>
>> What is coming on
>>
>> localhost:50070
>> localhost:50030
>>
>> Are you able to see console pages?
>>
>>
>>
>>
>> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> i am not able to run that command and logs are empty
>>>
>>>
>>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com> wrote:
>>>
>>>> Hi
>>>>
>>>> Are you able to run the wordcount example in hadoop-*-examples.jar
>>>> using this command.
>>>>
>>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>>> <#reducers>] <in-dir> <out-dir>
>>>>
>>>> check your JobTracker and TaskTracker is start correctly. see the logs.
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> it is the not the problem of MaxTemperature.jar,even the command of
>>>>> any >hadoop jar  xxx.jar  input output
>>>>>
>>>>> when i run the command , it is like [image: Inline image 1]
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com> wrote:
>>>>>
>>>>>> Hi mallik
>>>>>>
>>>>>> Do you submit the job to JobTrackter? like this code
>>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>>
>>>>>> maybe you can refer to this tutorial. [0]
>>>>>>
>>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>>
>>>>>>
>>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <
>>>>>> mallik.cloud@gmail.com> wrote:
>>>>>>
>>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well before.
>>>>>>> even now  if use >hadoop fs -ls these commands well but when i use the
>>>>>>> commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input
>>>>>>>  outputmap
>>>>>>>
>>>>>>> the cluster is not processing the job, what might be the problem,
>>>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>>>> very urget.
>>>>>>>
>>>>>>> thanks in advance.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Don't Grow Old, Grow Up... :-)
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Don't Grow Old, Grow Up... :-)
>>>>
>>>
>>>
>>
>

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
Both the NameNode and JobTracker are working well.



On Sun, Mar 10, 2013 at 10:25 AM, Jagat Singh <ja...@gmail.com> wrote:

> What is coming on
>
> localhost:50070
> localhost:50030
>
> Are you able to see console pages?
>
>
>
>
> On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <ma...@gmail.com>wrote:
>
>> i am not able to run that command and logs are empty
>>
>>
>> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com> wrote:
>>
>>> Hi
>>>
>>> Are you able to run the wordcount example in hadoop-*-examples.jar using
>>> this command.
>>>
>>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>>> <#reducers>] <in-dir> <out-dir>
>>>
>>> check your JobTracker and TaskTracker is start correctly. see the logs.
>>>
>>>
>>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> it is the not the problem of MaxTemperature.jar,even the command of any
>>>> >hadoop jar  xxx.jar  input output
>>>>
>>>> when i run the command , it is like [image: Inline image 1]
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com> wrote:
>>>>
>>>>> Hi mallik
>>>>>
>>>>> Do you submit the job to JobTrackter? like this code
>>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>>
>>>>> maybe you can refer to this tutorial. [0]
>>>>>
>>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>>
>>>>>
>>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <mallik.cloud@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well before.
>>>>>> even now  if use >hadoop fs -ls these commands well but when i use the
>>>>>> commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input
>>>>>>  outputmap
>>>>>>
>>>>>> the cluster is not processing the job, what might be the problem,
>>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>>> very urget.
>>>>>>
>>>>>> thanks in advance.
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Don't Grow Old, Grow Up... :-)
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Don't Grow Old, Grow Up... :-)
>>>
>>
>>
>

Re: hadoop cluster not working

Posted by Jagat Singh <ja...@gmail.com>.
What is coming up on:

localhost:50070
localhost:50030

Are you able to see the console pages?
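
Those two addresses are the default Hadoop 1.x web consoles (50070 for the NameNode, 50030 for the JobTracker). A quick scripted version of the same check (a sketch; the URLs assume the default ports and a single-node setup on localhost):

```python
import urllib.request
import urllib.error

def ui_reachable(url, timeout=3.0):
    """Return True if the daemon's web console answers at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, just not with HTTP 200
    except OSError:   # connection refused, unreachable, or timed out
        return False

if __name__ == "__main__":
    for url in ("http://localhost:50070", "http://localhost:50030"):
        print(url, "up" if ui_reachable(url) else "down")
```

If both consoles are down while `jps` shows the daemons running, the processes may be bound to a hostname other than localhost.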



On Sun, Mar 10, 2013 at 3:49 PM, mallik arjun <ma...@gmail.com>wrote:

> i am not able to run that command and logs are empty
>
>
> On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com> wrote:
>
>> Hi
>>
>> Are you able to run the wordcount example in hadoop-*-examples.jar using
>> this command.
>>
>> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
>> <#reducers>] <in-dir> <out-dir>
>>
>> check your JobTracker and TaskTracker is start correctly. see the logs.
>>
>>
>> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> it is the not the problem of MaxTemperature.jar,even the command of any
>>> >hadoop jar  xxx.jar  input output
>>>
>>> when i run the command , it is like [image: Inline image 1]
>>>
>>>
>>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com> wrote:
>>>
>>>> Hi mallik
>>>>
>>>> Do you submit the job to JobTrackter? like this code
>>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>>
>>>> maybe you can refer to this tutorial. [0]
>>>>
>>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>>
>>>>
>>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <ma...@gmail.com>wrote:
>>>>
>>>>> hai guys i am using hadoop version 1.0.3 , it was ran well before.
>>>>> even now  if use >hadoop fs -ls these commands well but when i use the
>>>>> commands like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input
>>>>>  outputmap
>>>>>
>>>>> the cluster is not processing the job, what might be the problem,
>>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>>> very urget.
>>>>>
>>>>> thanks in advance.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Don't Grow Old, Grow Up... :-)
>>>>
>>>
>>>
>>
>>
>> --
>> Don't Grow Old, Grow Up... :-)
>>
>
>

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
I am not able to run that command, and the logs are empty.


On Sun, Mar 10, 2013 at 8:56 AM, feng lu <am...@gmail.com> wrote:

> Hi
>
> Are you able to run the wordcount example in hadoop-*-examples.jar using
> this command.
>
> bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
> <#reducers>] <in-dir> <out-dir>
>
> check your JobTracker and TaskTracker is start correctly. see the logs.
>
>
> On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <ma...@gmail.com>wrote:
>
>> it is the not the problem of MaxTemperature.jar,even the command of any
>> >hadoop jar  xxx.jar  input output
>>
>> when i run the command , it is like [image: Inline image 1]
>>
>>
>> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com> wrote:
>>
>>> Hi mallik
>>>
>>> Do you submit the job to JobTrackter? like this code
>>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>>
>>> maybe you can refer to this tutorial. [0]
>>>
>>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>>
>>>
>>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <ma...@gmail.com>wrote:
>>>
>>>> hai guys i am using hadoop version 1.0.3 , it was ran well before. even
>>>> now  if use >hadoop fs -ls these commands well but when i use the commands
>>>> like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input  outputmap
>>>>
>>>> the cluster is not processing the job, what might be the problem,
>>>> please help me, when i see logs,nothing in the logs. please help me it is
>>>> very urget.
>>>>
>>>> thanks in advance.
>>>>
>>>
>>>
>>>
>>> --
>>> Don't Grow Old, Grow Up... :-)
>>>
>>
>>
>
>
> --
> Don't Grow Old, Grow Up... :-)
>

Re: hadoop cluster not working

Posted by feng lu <am...@gmail.com>.
Hi

Are you able to run the wordcount example in hadoop-*-examples.jar using
this command?

bin/hadoop jar hadoop-*-examples.jar wordcount [-m <#maps>] [-r
<#reducers>] <in-dir> <out-dir>

Check that your JobTracker and TaskTracker have started correctly, and look at their logs.


On Sun, Mar 10, 2013 at 11:01 AM, mallik arjun <ma...@gmail.com>wrote:

> it is the not the problem of MaxTemperature.jar,even the command of any
> >hadoop jar  xxx.jar  input output
>
> when i run the command , it is like [image: Inline image 1]
>
>
> On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com> wrote:
>
>> Hi mallik
>>
>> Do you submit the job to JobTrackter? like this code
>> JobClient.runJob(conf) in your MaxTemperature.jar package.
>>
>> maybe you can refer to this tutorial. [0]
>>
>> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>>
>>
>> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <ma...@gmail.com>wrote:
>>
>>> hai guys i am using hadoop version 1.0.3 , it was ran well before. even
>>> now  if use >hadoop fs -ls these commands well but when i use the commands
>>> like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input  outputmap
>>>
>>> the cluster is not processing the job, what might be the problem, please
>>> help me, when i see logs,nothing in the logs. please help me it is very
>>> urget.
>>>
>>> thanks in advance.
>>>
>>
>>
>>
>> --
>> Don't Grow Old, Grow Up... :-)
>>
>
>


-- 
Don't Grow Old, Grow Up... :-)

Re: hadoop cluster not working

Posted by mallik arjun <ma...@gmail.com>.
It is not a problem with MaxTemperature.jar; the same thing happens with any
command of the form >hadoop jar xxx.jar input output.

When I run the command, it looks like this: [image: Inline image 1]


On Sun, Mar 10, 2013 at 8:03 AM, feng lu <am...@gmail.com> wrote:

> Hi mallik
>
> Do you submit the job to JobTrackter? like this code
> JobClient.runJob(conf) in your MaxTemperature.jar package.
>
> maybe you can refer to this tutorial. [0]
>
> [0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
>
>
> On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <ma...@gmail.com>wrote:
>
>> hai guys i am using hadoop version 1.0.3 , it was ran well before. even
>> now  if use >hadoop fs -ls these commands well but when i use the commands
>> like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input  outputmap
>>
>> the cluster is not processing the job, what might be the problem, please
>> help me, when i see logs,nothing in the logs. please help me it is very
>> urget.
>>
>> thanks in advance.
>>
>
>
>
> --
> Don't Grow Old, Grow Up... :-)
>

Re: hadoop cluster not working

Posted by feng lu <am...@gmail.com>.
Hi mallik

Do you submit the job to the JobTracker, e.g. with JobClient.runJob(conf),
in your MaxTemperature.jar package?

Maybe you can refer to this tutorial. [0]

[0] http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html
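
For reference, a minimal old-API (org.apache.hadoop.mapred) driver that does submit the job looks roughly like this. This is only a sketch: the actual contents of MaxTemperature.jar are not shown in this thread, and the MaxTemperatureMapper / MaxTemperatureReducer class names are assumptions.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MaxTemperature {
    public static void main(String[] args) throws Exception {
        // Configure the job; the jar is located from the driver class.
        JobConf conf = new JobConf(MaxTemperature.class);
        conf.setJobName("max temperature");

        // args[0] = input dir in HDFS, args[1] = output dir (must not exist).
        FileInputFormat.addInputPath(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Hypothetical mapper/reducer class names, not shown in the thread.
        conf.setMapperClass(MaxTemperatureMapper.class);
        conf.setReducerClass(MaxTemperatureReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // Without this call, nothing is ever submitted to the JobTracker.
        JobClient.runJob(conf);
    }
}
```

Without the JobClient.runJob(conf) call the driver exits after configuration and no job ever reaches the JobTracker, which would match the symptom of a job that never starts and leaves nothing in the logs.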


On Sun, Mar 10, 2013 at 10:05 AM, mallik arjun <ma...@gmail.com>wrote:

> hai guys i am using hadoop version 1.0.3 , it was ran well before. even
> now  if use >hadoop fs -ls these commands well but when i use the commands
> like >hadoop jar /home/mallik/definite/MaxTemperature.jar  input  outputmap
>
> the cluster is not processing the job, what might be the problem, please
> help me, when i see logs,nothing in the logs. please help me it is very
> urget.
>
> thanks in advance.
>



-- 
Don't Grow Old, Grow Up... :-)
