Posted to common-user@hadoop.apache.org by A Df <ab...@yahoo.com> on 2011/08/10 14:31:40 UTC

Where is web interface in stand alone operation?

Dear All:

I know that in pseudo-distributed mode there is a web interface for the NameNode and the JobTracker, but where is it for standalone operation? The Hadoop page at http://hadoop.apache.org/common/docs/current/single_node_setup.html just shows how to run the example jar, but how do you view job details, for example time to complete? I know it will not be as detailed as in the other modes, but I wanted to compare job performance in standalone vs pseudo mode. Thank you.


Cheers,
A Df

Re: Where is web interface in stand alone operation?

Posted by Kai Voigt <k...@123.org>.
Hi,

just connect to http://localhost:50070/ or http://localhost:50030/ to access the web interfaces.
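
If the box is remote and you only have shell access, a quick sanity check that the UIs actually respond (a sketch, assuming curl is installed) is:

# Should print 200 if the NameNode / JobTracker web UIs are up:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50030/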

Kai

On 10.08.2011 at 14:31, A Df wrote:

> Dear All:
> 
> I know that in pseudo-distributed mode there is a web interface for the NameNode and the JobTracker, but where is it for standalone operation? The Hadoop page at http://hadoop.apache.org/common/docs/current/single_node_setup.html just shows how to run the example jar, but how do you view job details, for example time to complete? I know it will not be as detailed as in the other modes, but I wanted to compare job performance in standalone vs pseudo mode. Thank you.
> 
> 
> Cheers,
> A Df

-- 
Kai Voigt
k@123.org





Re: Where is web interface in stand alone operation?

Posted by Kai Voigt <k...@123.org>.
Hi,

For further clarification: are you running in standalone mode (one JVM which runs everything) or in pseudo-distributed mode (one machine, but with five JVMs)? With PDM, you can access the web interfaces on ports 50030 and 50070. With SM, you should at least be able to do process monitoring (the Unix time command, various trace commands).
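
As a rough sketch of the process-monitoring idea (assuming the stock examples jar and existing input/output directories):

# Wall-clock, user, and system time for a standalone run:
time bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output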

Kai

On 10.08.2011 at 15:58, A Df wrote:

> 
> 
> Hello Harsh:
> 
> See inline at *
> 
> 
>> ________________________________
>> From: Harsh J <ha...@cloudera.com>
>> To: common-user@hadoop.apache.org; A Df <ab...@yahoo.com>
>> Sent: Wednesday, 10 August 2011, 14:44
>> Subject: Re: Where is web interface in stand alone operation?
>> 
>> A Df,
>> 
>> The web UIs are a feature of the daemons JobTracker and NameNode. In
>> standalone/'local'/'file:///' modes, these daemons aren't run
>> (actually, no daemon is run at all), and hence there would be no 'web'
>> interface.
>> 
>> *ok, but is there any other way to check the performance in this mode, such as time to complete? I am trying to compare performance between the two. Also, for pseudo mode, how would I change the ports for the web interface? I have to connect to a remote server which only allows certain ports to be accessed from the web.
>> 
>> On Wed, Aug 10, 2011 at 6:01 PM, A Df <ab...@yahoo.com> wrote:
>>> Dear All:
>>> 
>>> I know that in pseudo-distributed mode there is a web interface for the NameNode and the JobTracker, but where is it for standalone operation? The Hadoop page at http://hadoop.apache.org/common/docs/current/single_node_setup.html just shows how to run the example jar, but how do you view job details, for example time to complete? I know it will not be as detailed as in the other modes, but I wanted to compare job performance in standalone vs pseudo mode. Thank you.
>>> 
>>> 
>>> Cheers,
>>> A Df
>>> 
>> 
>> 
>> 
>> -- 
>> Harsh J
>> 
>> 

-- 
Kai Voigt
k@123.org





Re: Where is web interface in stand alone operation?

Posted by A Df <ab...@yahoo.com>.
Hello:

I cannot reply inline, so here I go again. I checked the datanode logs, and there was a problem with the namespaceIDs for the namenode and datanode. I am not sure why, since I did not change those variables. A sample log for those interested is below, and my message continues after it.


2011-08-11 10:23:58,630 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ngs.wmin.ac.uk/161.74.12.97
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-08-11 10:23:59,208 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-w1153435/dfs/data: namenode namespaceID = 915370409; datanode namespaceID = 1914136941
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)

2011-08-11 10:23:59,209 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ngs.wmin.ac.uk/161.74.12.97
************************************************************/


I stopped Hadoop, deleted the data from the dfs.data.dir, and restarted everything. I also had to delete the input and output directories and set those up again; then it ran properly. I also tried the web interfaces, and they both worked. Thanks.
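
For anyone hitting the same namespaceID mismatch, the recovery amounted to something like the following sketch (the data directory path is taken from the log above; adjust to your own dfs.data.dir):

bin/stop-all.sh
rm -rf /tmp/hadoop-w1153435/dfs/data    # wipe the stale datanode storage
bin/start-all.sh
bin/hadoop fs -rmr input output         # then recreate the HDFS directories
bin/hadoop fs -put input input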


I will have a look at the logs. However, since standalone mode does not use logs, another user suggested using the time command and trace. How would I use strace, given that the job runs to completion and there is no window in which to run that command? I wanted to get job details such as those produced in the logs for pseudo operation, or failing that, details of the Java process. Is there a way to check the Java process for the standalone job while it is running, or afterwards?
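
One way around the short job lifetime is to wrap the launcher itself rather than attach afterwards; a sketch (strace's -f flag follows the JVM the launcher forks):

# Per-syscall summary for the whole run, written to trace.out:
strace -f -c -o trace.out bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output
# GNU time (-v) adds peak memory and a CPU breakdown:
/usr/bin/time -v bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output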


Cheers,
A Df




>________________________________
>From: Harsh J <ha...@cloudera.com>
>To: A Df <ab...@yahoo.com>
>Cc: "common-user@hadoop.apache.org" <co...@hadoop.apache.org>
>Sent: Thursday, 11 August 2011, 10:37
>Subject: Re: Where is web interface in stand alone operation?
>
>Looks like your DataNode isn't properly up. Wipe your dfs.data.dir
>directory and restart your DN (might be cause of the formatting
>troubles you had earlier). Take a look at your DN's logs though, to
>confirm and understand what's going wrong.
>
>On Thu, Aug 11, 2011 at 3:03 PM, A Df <ab...@yahoo.com> wrote:
>> Hi again:
>>
>> I did format the namenode, and it had a problem with a folder being locked. I
>> tried again and it formatted, but it was still unable to work. I tried to copy
>> input files and run the example jar. It gives:
>> my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop fs -put input input
>> 11/08/11 10:25:11 WARN hdfs.DFSClient: DataStreamer Exception:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could
>> only be replicated to 0 nodes, instead of 1
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>         at $Proxy0.addBlock(Unknown Source)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>         at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>         at $Proxy0.addBlock(Unknown Source)
>>         at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>>         at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>>         at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>>         at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>>
>> 11/08/11 10:25:11 WARN hdfs.DFSClient: Error Recovery for block null bad
>> datanode[0] nodes == null
>> 11/08/11 10:25:11 WARN hdfs.DFSClient: Could not get block locations. Source
>> file "/user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt" -
>> Aborting...
>> put: java.io.IOException: File
>> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could
>> only be replicated to 0 nodes, instead of 1
>> 11/08/11 10:25:11 ERROR hdfs.DFSClient: Exception closing file
>> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt :
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could
>> only be replicated to 0 nodes, instead of 1
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could
>> only be replicated to 0 nodes, instead of 1
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>
>>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>         at $Proxy0.addBlock(Unknown Source)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>         at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>         at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>         at $Proxy0.addBlock(Unknown Source)
>>         at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>>         at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>>         at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>>         at
>> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>> my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop fs -ls
>> Found 1 items
>> drwxr-xr-x   - my-user supergroup          0 2011-08-11 10:25
>> /user/my-user/input
>> my-user@ngs:~/hadoop-0.20.2_pseudo> ls
>> bin          docs                        input        logs
>> build.xml    hadoop-0.20.2-ant.jar       ivy          NOTICE.txt
>> c++          hadoop-0.20.2-core.jar      ivy.xml      README.txt
>> CHANGES.txt  hadoop-0.20.2-examples.jar  lib          src
>> conf         hadoop-0.20.2-test.jar      librecordio  webapps
>> contrib      hadoop-0.20.2-tools.jar     LICENSE.txt
>> my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop hadoop-0.20.2-examples.jar
>> wordcount input output
>> Exception in thread "main" java.lang.NoClassDefFoundError:
>> hadoop-0/20/2-examples/jar
>> my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop hadoop-0.20.2-examples.jar
>> grep input output 'dfs[a-z.]+'
>> Exception in thread "main" java.lang.NoClassDefFoundError:
>> hadoop-0/20/2-examples/jar
>>
>>
>> ________________________________
>> From: Harsh J <ha...@cloudera.com>
>> To: common-user@hadoop.apache.org
>> Sent: Thursday, 11 August 2011, 6:28
>> Subject: Re: Where is web interface in stand alone operation?
>>
>> Note: NameNode format affects the directory specified by "dfs.name.dir"
>>
>> On Thu, Aug 11, 2011 at 10:57 AM, Harsh J <ha...@cloudera.com> wrote:
>>> Have you done the following?
>>>
>>> bin/hadoop namenode -format
>>>
>>> On Thu, Aug 11, 2011 at 10:50 AM, A Df <ab...@yahoo.com>
>>> wrote:
>>>> Hello Again:
>>>> I extracted Hadoop and changed the XML as shown in the tutorial, but now
>>>> it seems it cannot get a connection. I am using PuTTY to ssh to the
>>>> server, and I changed the config files to set it up in pseudo mode as shown:
>>>> conf/core-site.xml:
>>>> <configuration>
>>>>      <property>
>>>>          <name>fs.default.name</name>
>>>>          <value>hdfs://localhost:9000</value>
>>>>      </property>
>>>> </configuration>
>>>> hdfs-site.xml:
>>>> <configuration>
>>>>      <property>
>>>>          <name>dfs.replication</name>
>>>>          <value>1</value>
>>>>      </property>
>>>>      <property>
>>>>          <name>dfs.http.address</name>
>>>>          <value>0.0.0.0:3500</value>
>>>>      </property>
>>>> </configuration>
>>>>
>>>> conf/mapred-site.xml:
>>>> <configuration>
>>>>      <property>
>>>>          <name>mapred.job.tracker</name>
>>>>          <value>localhost:9001</value>
>>>>      </property>
>>>>      <property>
>>>>          <name>mapred.job.tracker.http.address</name>
>>>>          <value>0.0.0.0:3501</value>
>>>>      </property>
>>>> </configuration>
>>>>
>>>> I tried to format the namenode and started all processes, but I noticed
>>>> that when I stopped them, it said that the namenode was not running. When
>>>> I try to run the example jar, it keeps timing out when connecting to
>>>> 127.0.0.1:port#. I used various port numbers and tried replacing localhost
>>>> with the name of the server, but it still times out. It also shows a long
>>>> address, name.server.ac.uk/161.74.12.97:3000, which seems to be repeating
>>>> itself, since name.server.ac.uk already has the IP address 161.74.12.97.
>>>> The console message is shown below. I was also having problems where it
>>>> did not want to format the namenode.
>>>>
>>>> Is something wrong with connecting to the namenode, and what caused it to
>>>> not format?
>>>>
>>>>
>>>> 2011-08-11 05:49:13,529 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>>> /************************************************************
>>>> STARTUP_MSG: Starting NameNode
>>>> STARTUP_MSG:   host = name.server.ac.uk/161.74.12.97
>>>> STARTUP_MSG:   args = []
>>>> STARTUP_MSG:   version = 0.20.2
>>>> STARTUP_MSG:   build =
>>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>>>> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>>> ************************************************************/
>>>> 2011-08-11 05:49:13,663 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>>>> Initializing RPC Metrics with hostName=NameNode, port=3000
>>>> 2011-08-11 05:49:13,669 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>>>> name.server.ac.uk/161.74.12.97:3000
>>>> 2011-08-11 05:49:13,672 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>>>> Initializing JVM Metrics with processName=NameNode, sessionId=null
>>>> 2011-08-11 05:49:13,674 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
>>>> Initializing
>>>> NameNodeMeterics using context
>>>> object:org.apache.hadoop.metrics.spi.NullContext
>>>> 2011-08-11 05:49:13,755 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> fsOwner=my-user,users,cluster_login
>>>> 2011-08-11 05:49:13,755 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> supergroup=supergroup
>>>> 2011-08-11 05:49:13,756 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>> isPermissionEnabled=true
>>>> 2011-08-11 05:49:13,768 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
>>>> Initializing FSNamesystemMetrics using context
>>>> object:org.apache.hadoop.metrics.spi.NullContext
>>>> 2011-08-11 05:49:13,770 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>>> FSNamesystemStatusMBean
>>>> 2011-08-11 05:49:13,812 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>>> initialization failed.
>>>> java.io.IOException: NameNode is not formatted.
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>>> 2011-08-11 05:49:13,813 INFO org.apache.hadoop.ipc.Server: Stopping
>>>> server
>>>> on 3000
>>>> 2011-08-11 05:49:13,814 ERROR
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>>> NameNode is not formatted.
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>>>     at
>>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>>>     at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>>>
>>>> 2011-08-11 05:49:13,814 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down NameNode at name.server.ac.uk/161.74.12.97
>>>> ************************************************************/
>>>>
>>>> Thank you,
>>>> A Df
>>>>
>>>> ________________________________
>>>> From: Harsh J <ha...@cloudera.com>
>>>> To: A Df <ab...@yahoo.com>
>>>> Sent: Wednesday, 10 August 2011, 15:13
>>>> Subject: Re: Where is web interface in stand alone operation?
>>>>
>>>> A Df,
>>>>
>>>> On Wed, Aug 10, 2011 at 7:28 PM, A Df <ab...@yahoo.com>
>>>> wrote:
>>>>>
>>>>> Hello Harsh:
>>>>> See inline at *
>>>>>
>>>>> ________________________________
>>>>> From: Harsh J <ha...@cloudera.com>
>>>>> To: common-user@hadoop.apache.org; A Df <ab...@yahoo.com>
>>>>> Sent: Wednesday, 10 August 2011, 14:44
>>>>> Subject: Re: Where is web interface in stand alone operation?
>>>>>
>>>>> A Df,
>>>>>
>>>>> The web UIs are a feature of the daemons JobTracker and NameNode. In
>>>>> standalone/'local'/'file:///' modes, these daemons aren't run
>>>>> (actually, no daemon is run at all), and hence there would be no 'web'
>>>>> interface.
>>>>>
>>>>> *ok, but is there any other way to check the performance in this mode,
>>>>> such as time to complete? I am trying to compare performance between the
>>>>> two. Also, for pseudo mode, how would I change the ports for the web
>>>>> interface? I have to connect to a remote server which only allows certain
>>>>> ports to be accessed from the web.
>>>>
>>>> The ports Kai mentioned above are sourced from the configs:
>>>> dfs.http.address (hdfs-site.xml) and mapred.job.tracker.http.address
>>>> (mapred-site.xml). You can change them to bind to a host:port of your
>>>> preference.
>>>>
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>>
>>
>> --
>> Harsh J
>>
>>
>>
>
>
>
>-- 
>Harsh J
>
>
>

Re: Where is web interface in stand alone operation?

Posted by Harsh J <ha...@cloudera.com>.
Looks like your DataNode isn't properly up. Wipe your dfs.data.dir
directory and restart your DN (might be cause of the formatting
troubles you had earlier). Take a look at your DN's logs though, to
confirm and understand what's going wrong.

On Thu, Aug 11, 2011 at 3:03 PM, A Df <ab...@yahoo.com> wrote:
> Hi again:
>
> I did format the namenode, and it had a problem with a folder being locked. I
> tried again and it formatted, but it was still unable to work. I tried to copy
> input files and run the example jar. It gives:
> my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop fs -put input input
> 11/08/11 10:25:11 WARN hdfs.DFSClient: DataStreamer Exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could
> only be replicated to 0 nodes, instead of 1
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> 11/08/11 10:25:11 WARN hdfs.DFSClient: Error Recovery for block null bad
> datanode[0] nodes == null
> 11/08/11 10:25:11 WARN hdfs.DFSClient: Could not get block locations. Source
> file "/user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt" -
> Aborting...
> put: java.io.IOException: File
> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could
> only be replicated to 0 nodes, instead of 1
> 11/08/11 10:25:11 ERROR hdfs.DFSClient: Exception closing file
> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt :
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could
> only be replicated to 0 nodes, instead of 1
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could
> only be replicated to 0 nodes, instead of 1
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop fs -ls
> Found 1 items
> drwxr-xr-x   - my-user supergroup          0 2011-08-11 10:25
> /user/my-user/input
> my-user@ngs:~/hadoop-0.20.2_pseudo> ls
> bin          docs                        input        logs
> build.xml    hadoop-0.20.2-ant.jar       ivy          NOTICE.txt
> c++          hadoop-0.20.2-core.jar      ivy.xml      README.txt
> CHANGES.txt  hadoop-0.20.2-examples.jar  lib          src
> conf         hadoop-0.20.2-test.jar      librecordio  webapps
> contrib      hadoop-0.20.2-tools.jar     LICENSE.txt
> my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop hadoop-0.20.2-examples.jar
> wordcount input output
> Exception in thread "main" java.lang.NoClassDefFoundError:
> hadoop-0/20/2-examples/jar
> my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop hadoop-0.20.2-examples.jar
> grep input output 'dfs[a-z.]+'
> Exception in thread "main" java.lang.NoClassDefFoundError:
> hadoop-0/20/2-examples/jar
>
>
> ________________________________
> From: Harsh J <ha...@cloudera.com>
> To: common-user@hadoop.apache.org
> Sent: Thursday, 11 August 2011, 6:28
> Subject: Re: Where is web interface in stand alone operation?
>
> Note: NameNode format affects the directory specified by "dfs.name.dir"
>
> On Thu, Aug 11, 2011 at 10:57 AM, Harsh J <ha...@cloudera.com> wrote:
>> Have you done the following?
>>
>> bin/hadoop namenode -format
>>
>> On Thu, Aug 11, 2011 at 10:50 AM, A Df <ab...@yahoo.com>
>> wrote:
>>> Hello Again:
>>> I extracted Hadoop and changed the XML as shown in the tutorial, but now
>>> it seems it cannot get a connection. I am using PuTTY to ssh to the
>>> server, and I changed the config files to set it up in pseudo mode as shown:
>>> conf/core-site.xml:
>>> <configuration>
>>>      <property>
>>>          <name>fs.default.name</name>
>>>          <value>hdfs://localhost:9000</value>
>>>      </property>
>>> </configuration>
>>> hdfs-site.xml:
>>> <configuration>
>>>      <property>
>>>          <name>dfs.replication</name>
>>>          <value>1</value>
>>>      </property>
>>>      <property>
>>>          <name>dfs.http.address</name>
>>>          <value>0.0.0.0:3500</value>
>>>      </property>
>>> </configuration>
>>>
>>> conf/mapred-site.xml:
>>> <configuration>
>>>      <property>
>>>          <name>mapred.job.tracker</name>
>>>          <value>localhost:9001</value>
>>>      </property>
>>>      <property>
>>>          <name>mapred.job.tracker.http.address</name>
>>>          <value>0.0.0.0:3501</value>
>>>      </property>
>>> </configuration>
>>>
>>> I tried to format the namenode and started all processes, but I noticed
>>> that when I stopped them, it said that the namenode was not running. When
>>> I try to run the example jar, it keeps timing out when connecting to
>>> 127.0.0.1:port#. I used various port numbers and tried replacing localhost
>>> with the name of the server, but it still times out. It also shows a long
>>> address, name.server.ac.uk/161.74.12.97:3000, which seems to be repeating
>>> itself, since name.server.ac.uk already has the IP address 161.74.12.97.
>>> The console message is shown below. I was also having problems where it
>>> did not want to format the namenode.
>>>
>>> Is something wrong with connecting to the namenode, and what caused it to
>>> not format?
>>>
>>>
>>> 2011-08-11 05:49:13,529 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting NameNode
>>> STARTUP_MSG:   host = name.server.ac.uk/161.74.12.97
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.20.2
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>>> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>> ************************************************************/
>>> 2011-08-11 05:49:13,663 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>>> Initializing RPC Metrics with hostName=NameNode, port=3000
>>> 2011-08-11 05:49:13,669 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>>> name.server.ac.uk/161.74.12.97:3000
>>> 2011-08-11 05:49:13,672 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>>> Initializing JVM Metrics with processName=NameNode, sessionId=null
>>> 2011-08-11 05:49:13,674 INFO
>>> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
>>> Initializing
>>> NameNodeMeterics using context
>>> object:org.apache.hadoop.metrics.spi.NullContext
>>> 2011-08-11 05:49:13,755 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> fsOwner=my-user,users,cluster_login
>>> 2011-08-11 05:49:13,755 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> supergroup=supergroup
>>> 2011-08-11 05:49:13,756 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> isPermissionEnabled=true
>>> 2011-08-11 05:49:13,768 INFO
>>> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
>>> Initializing FSNamesystemMetrics using context
>>> object:org.apache.hadoop.metrics.spi.NullContext
>>> 2011-08-11 05:49:13,770 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>> FSNamesystemStatusMBean
>>> 2011-08-11 05:49:13,812 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>> initialization failed.
>>> java.io.IOException: NameNode is not formatted.
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>> 2011-08-11 05:49:13,813 INFO org.apache.hadoop.ipc.Server: Stopping
>>> server
>>> on 3000
>>> 2011-08-11 05:49:13,814 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>> NameNode is not formatted.
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>>     at
>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>>
>>> 2011-08-11 05:49:13,814 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at name.server.ac.uk/161.74.12.97
>>> ************************************************************/
>>>
>>> Thank you,
>>> A Df
>>>
>>> ________________________________
>>> From: Harsh J <ha...@cloudera.com>
>>> To: A Df <ab...@yahoo.com>
>>> Sent: Wednesday, 10 August 2011, 15:13
>>> Subject: Re: Where is web interface in stand alone operation?
>>>
>>> A Df,
>>>
>>> On Wed, Aug 10, 2011 at 7:28 PM, A Df <ab...@yahoo.com>
>>> wrote:
>>>>
>>>> Hello Harsh:
>>>> See inline at *
>>>>
>>>> ________________________________
>>>> From: Harsh J <ha...@cloudera.com>
>>>> To: common-user@hadoop.apache.org; A Df <ab...@yahoo.com>
>>>> Sent: Wednesday, 10 August 2011, 14:44
>>>> Subject: Re: Where is web interface in stand alone operation?
>>>>
>>>> A Df,
>>>>
>>>> The web UIs are a feature of the daemons JobTracker and NameNode. In
>>>> standalone/'local'/'file:///' modes, these daemons aren't run
>>>> (actually, no daemon is run at all), and hence there would be no 'web'
>>>> interface.
>>>>
>>>> *ok, but is there any other way to check the performance in this mode,
>>>> such as time to complete? I am trying to compare performance between the
>>>> two. Also, for pseudo mode, how would I change the ports for the web
>>>> interface? I have to connect to a remote server which only allows certain
>>>> ports to be accessed from the web.
>>>
>>> The ports Kai mentioned above are sourced from the configs:
>>> dfs.http.address (hdfs-site.xml) and mapred.job.tracker.http.address
>>> (mapred-site.xml). You can change them to bind to a host:port of your
>>> preference.
>>>
>>>
>>> --
>>> Harsh J
>>>
>>>
>>>
>>
>>
>>
>> --
>> Harsh J
>>
>
>
>
> --
> Harsh J
>
>
>



-- 
Harsh J

Re: Where is web interface in stand alone operation?

Posted by A Df <ab...@yahoo.com>.
Hi again:


I did format the namenode, and it had a problem with a folder being locked. I tried again and it formatted, but it was still unable to work. I tried to copy input files and run the example jar. It gives:

my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop fs -put input input
11/08/11 10:25:11 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

11/08/11 10:25:11 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
11/08/11 10:25:11 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt" - Aborting...
put: java.io.IOException: File /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could only be replicated to 0 nodes, instead of 1
11/08/11 10:25:11 ERROR hdfs.DFSClient: Exception closing file /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop fs -ls
Found 1 items
drwxr-xr-x   - my-user supergroup          0 2011-08-11 10:25 /user/my-user/input
my-user@ngs:~/hadoop-0.20.2_pseudo> ls
bin          docs                        input        logs
build.xml    hadoop-0.20.2-ant.jar       ivy          NOTICE.txt
c++          hadoop-0.20.2-core.jar      ivy.xml      README.txt
CHANGES.txt  hadoop-0.20.2-examples.jar  lib          src
conf         hadoop-0.20.2-test.jar      librecordio  webapps
contrib      hadoop-0.20.2-tools.jar     LICENSE.txt
my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop hadoop-0.20.2-examples.jar wordcount input output
Exception in thread "main" java.lang.NoClassDefFoundError: hadoop-0/20/2-examples/jar
my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop hadoop-0.20.2-examples.jar grep input output 'dfs[a-z.]+'
Exception in thread "main" java.lang.NoClassDefFoundError: hadoop-0/20/2-examples/jar
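
A side note: that NoClassDefFoundError for hadoop-0/20/2-examples/jar suggests the "jar" subcommand was omitted, so bin/hadoop treats the file name as a class name. The invocations would likely need to be:

bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output
bin/hadoop jar hadoop-0.20.2-examples.jar grep input output 'dfs[a-z.]+'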





>________________________________
>From: Harsh J <ha...@cloudera.com>
>To: common-user@hadoop.apache.org
>Sent: Thursday, 11 August 2011, 6:28
>Subject: Re: Where is web interface in stand alone operation?
>
>Note: NameNode format affects the directory specified by "dfs.name.dir"
>
>On Thu, Aug 11, 2011 at 10:57 AM, Harsh J <ha...@cloudera.com> wrote:
>> Have you done the following?
>>
>> bin/hadoop namenode -format
>>
>> On Thu, Aug 11, 2011 at 10:50 AM, A Df <ab...@yahoo.com> wrote:
>>> Hello Again:
>>> I extracted Hadoop and changed the XML as shown in the tutorial, but now it
>>> seems it cannot get a connection. I am using PuTTY to ssh to the server, and
>>> I changed the config files to set it up in pseudo mode as shown:
>>> conf/core-site.xml:
>>> <configuration>
>>>      <property>
>>>          <name>fs.default.name</name>
>>>          <value>hdfs://localhost:9000</value>
>>>      </property>
>>> </configuration>
>>> hdfs-site.xml:
>>> <configuration>
>>>      <property>
>>>          <name>dfs.replication</name>
>>>          <value>1</value>
>>>      </property>
>>>      <property>
>>>          <name>dfs.http.address</name>
>>>          <value>0.0.0.0:3500</value>
>>>      </property>
>>> </configuration>
>>>
>>> conf/mapred-site.xml:
>>> <configuration>
>>>      <property>
>>>          <name>mapred.job.tracker</name>
>>>          <value>localhost:9001</value>
>>>      </property>
>>>      <property>
>>>          <name>mapred.job.tracker.http.address</name>
>>>          <value>0.0.0.0:3501</value>
>>>      </property>
>>> </configuration>
>>>
>>> I tried to format the namenode and started all processes, but I noticed that
>>> when I stopped them, it said that the namenode was not running. When I try
>>> to run the example jar, it keeps timing out when connecting to
>>> 127.0.0.1:port#. I used various port numbers and tried replacing localhost
>>> with the name of the server, but it still times out. It also shows a long
>>> address, name.server.ac.uk/161.74.12.97:3000, which seems to be repeating
>>> itself, since name.server.ac.uk already has the IP address 161.74.12.97. The
>>> console message is shown below. I was also having problems where it did not
>>> want to format the namenode.
>>>
>>> Is something wrong with connecting to the namenode, and what caused it to
>>> not format?
>>>
>>>
>>> 2011-08-11 05:49:13,529 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting NameNode
>>> STARTUP_MSG:   host = name.server.ac.uk/161.74.12.97
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.20.2
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>>> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>> ************************************************************/
>>> 2011-08-11 05:49:13,663 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>>> Initializing RPC Metrics with hostName=NameNode, port=3000
>>> 2011-08-11 05:49:13,669 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>>> name.server.ac.uk/161.74.12.97:3000
>>> 2011-08-11 05:49:13,672 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>>> Initializing JVM Metrics with processName=NameNode, sessionId=null
>>> 2011-08-11 05:49:13,674 INFO
>>> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
>>> NameNodeMeterics using context
>>> object:org.apache.hadoop.metrics.spi.NullContext
>>> 2011-08-11 05:49:13,755 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> fsOwner=my-user,users,cluster_login
>>> 2011-08-11 05:49:13,755 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>>> 2011-08-11 05:49:13,756 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>> isPermissionEnabled=true
>>> 2011-08-11 05:49:13,768 INFO
>>> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
>>> Initializing FSNamesystemMetrics using context
>>> object:org.apache.hadoop.metrics.spi.NullContext
>>> 2011-08-11 05:49:13,770 INFO
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>>> FSNamesystemStatusMBean
>>> 2011-08-11 05:49:13,812 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>>> initialization failed.
>>> java.io.IOException: NameNode is not formatted.
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>> 2011-08-11 05:49:13,813 INFO org.apache.hadoop.ipc.Server: Stopping server
>>> on 3000
>>> 2011-08-11 05:49:13,814 ERROR
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>>> NameNode is not formatted.
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>>     at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>>
>>> 2011-08-11 05:49:13,814 INFO
>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at name.server.ac.uk/161.74.12.97
>>> ************************************************************/
>>>
>>> Thank you,
>>> A Df
>>>
>>> ________________________________
>>> From: Harsh J <ha...@cloudera.com>
>>> To: A Df <ab...@yahoo.com>
>>> Sent: Wednesday, 10 August 2011, 15:13
>>> Subject: Re: Where is web interface in stand alone operation?
>>>
>>> A Df,
>>>
>>> On Wed, Aug 10, 2011 at 7:28 PM, A Df <ab...@yahoo.com> wrote:
>>>>
>>>> Hello Harsh:
>>>> See inline at *
>>>>
>>>> ________________________________
>>>> From: Harsh J <ha...@cloudera.com>
>>>> To: common-user@hadoop.apache.org; A Df <ab...@yahoo.com>
>>>> Sent: Wednesday, 10 August 2011, 14:44
>>>> Subject: Re: Where is web interface in stand alone operation?
>>>>
>>>> A Df,
>>>>
>>>> The web UIs are a feature of the daemons JobTracker and NameNode. In
>>>> standalone/'local'/'file:///' modes, these daemons aren't run
>>>> (actually, no daemon is run at all), and hence there would be no 'web'
>>>> interface.
>>>>
>>>> *ok, but is there any other way to check the performance in this mode,
>>>> such as time to complete? I am trying to compare performance between the
>>>> two. Also, for pseudo mode, how would I change the ports for the web
>>>> interface? I have to connect to a remote server which only allows certain
>>>> ports to be accessed from the web.
>>>
>>> The ports Kai mentioned above are sourced from the configs:
>>> dfs.http.address (hdfs-site.xml) and mapred.job.tracker.http.address
>>> (mapred-site.xml). You can change them to bind to a host:port of your
>>> preference.
>>>
>>>
>>> --
>>> Harsh J
>>>
>>>
>>>
>>
>>
>>
>> --
>> Harsh J
>>
>
>
>
>-- 
>Harsh J
>
>
>

Re: Where is web interface in stand alone operation?

Posted by Harsh J <ha...@cloudera.com>.
Note: NameNode format affects the directory specified by "dfs.name.dir"
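
For instance, a sketch of pinning the storage directories somewhere persistent in hdfs-site.xml (the paths here are illustrative), so a /tmp cleanup cannot wipe them:

     <property>
         <name>dfs.name.dir</name>
         <value>/home/my-user/hadoop-data/name</value>
     </property>
     <property>
         <name>dfs.data.dir</name>
         <value>/home/my-user/hadoop-data/data</value>
     </property>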

On Thu, Aug 11, 2011 at 10:57 AM, Harsh J <ha...@cloudera.com> wrote:
> Have you done the following?
>
> bin/hadoop namenode -format
>
> On Thu, Aug 11, 2011 at 10:50 AM, A Df <ab...@yahoo.com> wrote:
>> Hello Again:
>> I extracted Hadoop and changed the XML as shown in the tutorial, but now it
>> seems it cannot get a connection. I am using PuTTY to ssh to the server, and
>> I changed the config files to set it up in pseudo mode as shown:
>> conf/core-site.xml:
>> <configuration>
>>      <property>
>>          <name>fs.default.name</name>
>>          <value>hdfs://localhost:9000</value>
>>      </property>
>> </configuration>
>> hdfs-site.xml:
>> <configuration>
>>      <property>
>>          <name>dfs.replication</name>
>>          <value>1</value>
>>      </property>
>>      <property>
>>          <name>dfs.http.address</name>
>>          <value>0.0.0.0:3500</value>
>>      </property>
>> </configuration>
>>
>> conf/mapred-site.xml:
>> <configuration>
>>      <property>
>>          <name>mapred.job.tracker</name>
>>          <value>localhost:9001</value>
>>      </property>
>>      <property>
>>          <name>mapred.job.tracker.http.address</name>
>>          <value>0.0.0.0:3501</value>
>>      </property>
>> </configuration>
>>
>> I tried to format the namenode and started all processes, but I noticed that
>> when I stopped them, it said that the namenode was not running. When I try to
>> run the example jar, it keeps timing out when connecting to 127.0.0.1:port#.
>> I used various port numbers and tried replacing localhost with the name of
>> the server, but it still times out. It also shows a long address,
>> name.server.ac.uk/161.74.12.97:3000, which seems to be repeating itself,
>> since name.server.ac.uk already has the IP address 161.74.12.97. The console
>> message is shown below. I was also having problems where it did not want to
>> format the namenode.
>>
>> Is something wrong with connecting to the namenode, and what caused it to
>> not format?
>>
>>
>> 2011-08-11 05:49:13,529 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting NameNode
>> STARTUP_MSG:   host = name.server.ac.uk/161.74.12.97
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2
>> STARTUP_MSG:   build =
>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
>> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>> ************************************************************/
>> 2011-08-11 05:49:13,663 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>> Initializing RPC Metrics with hostName=NameNode, port=3000
>> 2011-08-11 05:49:13,669 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
>> name.server.ac.uk/161.74.12.97:3000
>> 2011-08-11 05:49:13,672 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=NameNode, sessionId=null
>> 2011-08-11 05:49:13,674 INFO
>> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
>> NameNodeMeterics using context
>> object:org.apache.hadoop.metrics.spi.NullContext
>> 2011-08-11 05:49:13,755 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> fsOwner=w1153435,users,cluster_login
>> 2011-08-11 05:49:13,755 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
>> 2011-08-11 05:49:13,756 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>> isPermissionEnabled=true
>> 2011-08-11 05:49:13,768 INFO
>> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
>> Initializing FSNamesystemMetrics using context
>> object:org.apache.hadoop.metrics.spi.NullContext
>> 2011-08-11 05:49:13,770 INFO
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
>> FSNamesystemStatusMBean
>> 2011-08-11 05:49:13,812 ERROR
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
>> initialization failed.
>> java.io.IOException: NameNode is not formatted.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>> 2011-08-11 05:49:13,813 INFO org.apache.hadoop.ipc.Server: Stopping server
>> on 3000
>> 2011-08-11 05:49:13,814 ERROR
>> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
>> NameNode is not formatted.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>>
>> 2011-08-11 05:49:13,814 INFO
>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at name.server.ac.uk/161.74.12.97
>> ************************************************************/
>>
>> Thank you,
>> A Df
>>
>> ________________________________
>> From: Harsh J <ha...@cloudera.com>
>> To: A Df <ab...@yahoo.com>
>> Sent: Wednesday, 10 August 2011, 15:13
>> Subject: Re: Where is web interface in stand alone operation?
>>
>> A Df,
>>
>> On Wed, Aug 10, 2011 at 7:28 PM, A Df <ab...@yahoo.com> wrote:
>>>
>>> Hello Harsh:
>>> See inline at *
>>>
>>> ________________________________
>>> From: Harsh J <ha...@cloudera.com>
>>> To: common-user@hadoop.apache.org; A Df <ab...@yahoo.com>
>>> Sent: Wednesday, 10 August 2011, 14:44
>>> Subject: Re: Where is web interface in stand alone operation?
>>>
>>> A Df,
>>>
>>> The web UIs are a feature of the daemons JobTracker and NameNode. In
>>> standalone/'local'/'file:///' modes, these daemons aren't run
>>> (actually, no daemon is run at all), and hence there would be no 'web'
>>> interface.
>>>
>>> *Ok, but is there any other way to check performance in this mode, such
>>> as time to complete, etc.? I am trying to compare performance between the
>>> two. Also, for pseudo mode, how would I change the ports for the web
>>> interface? I have to connect to a remote server which only allows
>>> certain ports to be accessed from the web.
>>
>> The ports Kai mentioned above are sourced from the configs:
>> dfs.http.address (hdfs-site.xml) and mapred.job.tracker.http.address
>> (mapred-site.xml). You can change them to bind to a host:port of your
>> preference.
>>
>>
>> --
>> Harsh J
>>
>>
>>
>
>
>
> --
> Harsh J
>



-- 
Harsh J

Re: Where is web interface in stand alone operation?

Posted by Harsh J <ha...@cloudera.com>.
Have you done the following?

bin/hadoop namenode -format
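
For reference, a first-run sequence on 0.20.x (a sketch, run from the Hadoop
install directory; note that formatting erases any existing HDFS metadata)
would be:

bin/hadoop namenode -format   # initializes dfs.name.dir; answer Y if prompted
bin/start-all.sh              # starts the HDFS and MapReduce daemons
jps                           # JDK tool; lists running Java processes, to verify startup
bin/hadoop fs -ls /           # quick sanity check against the NameNode

If jps does not show a NameNode process after start-all.sh, check the NameNode
log under logs/ for errors like the "NameNode is not formatted" one you posted.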

On Thu, Aug 11, 2011 at 10:50 AM, A Df <ab...@yahoo.com> wrote:
> Hello Again:
> I extracted hadoop and changed the xml as shown in the tutorial, but now it
> seems it cannot get a connection. I am using PuTTY to ssh to the server, and
> I changed the config files to set it up in pseudo mode as shown:
> conf/core-site.xml:
> <configuration>
>      <property>
>          <name>fs.default.name</name>
>          <value>hdfs://localhost:9000</value>
>      </property>
> </configuration>
> hdfs-site.xml:
> <configuration>
>      <property>
>          <name>dfs.replication</name>
>          <value>1</value>
>      </property>
>      <property>
>          <name>dfs.http.address</name>
>          <value>0.0.0.0:3500</value>
>      </property>
> </configuration>
>
> conf/mapred-site.xml:
> <configuration>
>      <property>
>          <name>mapred.job.tracker</name>
>          <value>localhost:9001</value>
>      </property>
>      <property>
>          <name>mapred.job.tracker.http.address</name>
>          <value>0.0.0.0:3501</value>
>      </property>
> </configuration>
>
> I tried to format the namenode and started all the processes, but I noticed
> that when I stopped them, it said the namenode was not running. When I try
> to run the example jar, it keeps timing out connecting to 127.0.0.1:port#.
> I used various port numbers and tried replacing localhost with the server's
> name, but it still times out. It also logs a long address,
> name.server.ac.uk/161.74.12.97:3000, which looks redundant since
> name.server.ac.uk already resolves to 161.74.12.97. The console message is
> shown below. I was also having trouble getting the namenode to format.
>
> Is something wrong with the connection to the namenode, and what caused the
> format to fail?
>
>
> 2011-08-11 05:49:13,529 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = name.server.ac.uk/161.74.12.97
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build =
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 2011-08-11 05:49:13,663 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=NameNode, port=3000
> 2011-08-11 05:49:13,669 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
> name.server.ac.uk/161.74.12.97:3000
> 2011-08-11 05:49:13,672 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2011-08-11 05:49:13,674 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing
> NameNodeMeterics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2011-08-11 05:49:13,755 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=w1153435,users,cluster_login
> 2011-08-11 05:49:13,755 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> 2011-08-11 05:49:13,756 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2011-08-11 05:49:13,768 INFO
> org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
> Initializing FSNamesystemMetrics using context
> object:org.apache.hadoop.metrics.spi.NullContext
> 2011-08-11 05:49:13,770 INFO
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> FSNamesystemStatusMBean
> 2011-08-11 05:49:13,812 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.io.IOException: NameNode is not formatted.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
> 2011-08-11 05:49:13,813 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 3000
> 2011-08-11 05:49:13,814 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
> NameNode is not formatted.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 2011-08-11 05:49:13,814 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at name.server.ac.uk/161.74.12.97
> ************************************************************/
>
> Thank you,
> A Df
>
> ________________________________
> From: Harsh J <ha...@cloudera.com>
> To: A Df <ab...@yahoo.com>
> Sent: Wednesday, 10 August 2011, 15:13
> Subject: Re: Where is web interface in stand alone operation?
>
> A Df,
>
> On Wed, Aug 10, 2011 at 7:28 PM, A Df <ab...@yahoo.com> wrote:
>>
>> Hello Harsh:
>> See inline at *
>>
>> ________________________________
>> From: Harsh J <ha...@cloudera.com>
>> To: common-user@hadoop.apache.org; A Df <ab...@yahoo.com>
>> Sent: Wednesday, 10 August 2011, 14:44
>> Subject: Re: Where is web interface in stand alone operation?
>>
>> A Df,
>>
>> The web UIs are a feature of the daemons JobTracker and NameNode. In
>> standalone/'local'/'file:///' modes, these daemons aren't run
>> (actually, no daemon is run at all), and hence there would be no 'web'
>> interface.
>>
>> *Ok, but is there any other way to check performance in this mode, such
>> as time to complete, etc.? I am trying to compare performance between the
>> two. Also, for pseudo mode, how would I change the ports for the web
>> interface? I have to connect to a remote server which only allows
>> certain ports to be accessed from the web.
>
> The ports Kai mentioned above are sourced from the configs:
> dfs.http.address (hdfs-site.xml) and mapred.job.tracker.http.address
> (mapred-site.xml). You can change them to bind to a host:port of your
> preference.
>
>
> --
> Harsh J
>
>
>



-- 
Harsh J

Re: Where is web interface in stand alone operation?

Posted by A Df <ab...@yahoo.com>.
Hello Again:

I extracted hadoop and changed the xml as shown in the tutorial, but now it seems it cannot get a connection. I am using PuTTY to ssh to the server, and I changed the config files to set it up in pseudo mode as shown:

conf/core-site.xml:
<configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
     </property>
</configuration>

hdfs-site.xml:
<configuration>
     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
     <property>
         <name>dfs.http.address</name>
         <value>0.0.0.0:3500</value>
     </property>
</configuration>


conf/mapred-site.xml:
<configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>localhost:9001</value>
     </property>
     <property>
         <name>mapred.job.tracker.http.address</name>
         <value>0.0.0.0:3501</value>
     </property>
</configuration>

I tried to format the namenode and started all the processes, but I noticed that when I stopped them, it said the namenode was not running. When I try to run the example jar, it keeps timing out connecting to 127.0.0.1:port#. I used various port numbers and tried replacing localhost with the server's name, but it still times out. It also logs a long address, name.server.ac.uk/161.74.12.97:3000, which looks redundant since name.server.ac.uk already resolves to 161.74.12.97. The console message is shown below. I was also having trouble getting the namenode to format.

Is something wrong with the connection to the namenode, and what caused the format to fail?


2011-08-11 05:49:13,529 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = name.server.ac.uk/161.74.12.97
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-08-11 05:49:13,663 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=3000
2011-08-11 05:49:13,669 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: name.server.ac.uk/161.74.12.97:3000
2011-08-11 05:49:13,672 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2011-08-11 05:49:13,674 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2011-08-11 05:49:13,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=w1153435,users,cluster_login
2011-08-11 05:49:13,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-08-11 05:49:13,756 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2011-08-11 05:49:13,768 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2011-08-11 05:49:13,770 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2011-08-11 05:49:13,812 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2011-08-11 05:49:13,813 INFO org.apache.hadoop.ipc.Server: Stopping server on 3000
2011-08-11 05:49:13,814 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

2011-08-11 05:49:13,814 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at name.server.ac.uk/161.74.12.97
************************************************************/

Thank you,
A Df




>________________________________
>From: Harsh J <ha...@cloudera.com>
>To: A Df <ab...@yahoo.com>
>Sent: Wednesday, 10 August 2011, 15:13
>Subject: Re: Where is web interface in stand alone operation?
>
>A Df,
>
>On Wed, Aug 10, 2011 at 7:28 PM, A Df <ab...@yahoo.com> wrote:
>>
>> Hello Harsh:
>> See inline at *
>>
>> ________________________________
>> From: Harsh J <ha...@cloudera.com>
>> To: common-user@hadoop.apache.org; A Df <ab...@yahoo.com>
>> Sent: Wednesday, 10 August 2011, 14:44
>> Subject: Re: Where is web interface in stand alone operation?
>>
>> A Df,
>>
>> The web UIs are a feature of the daemons JobTracker and NameNode. In
>> standalone/'local'/'file:///' modes, these daemons aren't run
>> (actually, no daemon is run at all), and hence there would be no 'web'
>> interface.
>>
>> *Ok, but is there any other way to check performance in this mode, such
>> as time to complete, etc.? I am trying to compare performance between the
>> two. Also, for pseudo mode, how would I change the ports for the web
>> interface? I have to connect to a remote server which only allows
>> certain ports to be accessed from the web.
>
>The ports Kai mentioned above are sourced from the configs:
>dfs.http.address (hdfs-site.xml) and mapred.job.tracker.http.address
>(mapred-site.xml). You can change them to bind to a host:port of your
>preference.
>
>
>-- 
>Harsh J
>
>
>

Re: Where is web interface in stand alone operation?

Posted by A Df <ab...@yahoo.com>.

Hello Harsh:

See inline at *


>________________________________
>From: Harsh J <ha...@cloudera.com>
>To: common-user@hadoop.apache.org; A Df <ab...@yahoo.com>
>Sent: Wednesday, 10 August 2011, 14:44
>Subject: Re: Where is web interface in stand alone operation?
>
>A Df,
>
>The web UIs are a feature of the daemons JobTracker and NameNode. In
>standalone/'local'/'file:///' modes, these daemons aren't run
>(actually, no daemon is run at all), and hence there would be no 'web'
>interface.
>
>*Ok, but is there any other way to check performance in this mode, such as time to complete, etc.? I am trying to compare performance between the two. Also, for pseudo mode, how would I change the ports for the web interface? I have to connect to a remote server which only allows certain ports to be accessed from the web.
>
>On Wed, Aug 10, 2011 at 6:01 PM, A Df <ab...@yahoo.com> wrote:
>> Dear All:
>>
>> I know that in pseudo mode there is a web interface for the NameNode and the JobTracker, but where is it for standalone operation? The Hadoop page at http://hadoop.apache.org/common/docs/current/single_node_setup.html just shows how to run the jar example, but how do you view job details, for example time to complete, etc.? I know it will not be as detailed as the other modes, but I wanted to compare job performance in standalone vs pseudo mode. Thank you.
>>
>>
>> Cheers,
>> A Df
>>
>
>
>
>-- 
>Harsh J
>
>
>

Re: Where is web interface in stand alone operation?

Posted by Harsh J <ha...@cloudera.com>.
A Df,

The web UIs are a feature of the daemons JobTracker and NameNode. In
standalone/'local'/'file:///' modes, these daemons aren't run
(actually, no daemon is run at all), and hence there would be no 'web'
interface.
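
For rough comparisons without daemons, a minimal sketch is to time the job
from the shell ("input" and "output" below are hypothetical local paths; the
jar name matches the 0.20.2 tarball):

time bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output

The "real" figure gives wall-clock duration, and the counters the job client
prints on completion (map input records, bytes read/written, etc.) can be
compared against the same job run in pseudo-distributed mode.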

On Wed, Aug 10, 2011 at 6:01 PM, A Df <ab...@yahoo.com> wrote:
> Dear All:
>
> I know that in pseudo mode there is a web interface for the NameNode and the JobTracker, but where is it for standalone operation? The Hadoop page at http://hadoop.apache.org/common/docs/current/single_node_setup.html just shows how to run the jar example, but how do you view job details, for example time to complete, etc.? I know it will not be as detailed as the other modes, but I wanted to compare job performance in standalone vs pseudo mode. Thank you.
>
>
> Cheers,
> A Df
>



-- 
Harsh J