Posted to mapreduce-user@hadoop.apache.org by Ashish Umrani <as...@gmail.com> on 2013/07/23 14:55:55 UTC

New hadoop 1.2 single node installation giving problems

Hi There,

First of all, sorry if I am asking a stupid question.  Being new to the
Hadoop environment, I am finding it a bit difficult to figure out why it is
failing.

I have installed Hadoop 1.2, based on the instructions given in the following
link:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/

All went well: I could run start-all.sh, and the jps command shows all 5
processes to be present.

However, when I try to run

hadoop fs -ls

I get the following error

hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop fs
-ls
Warning: $HADOOP_HOME is deprecated.

13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
<property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
<property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
<property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
<property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
<property>
13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
<property>
ls: Cannot access .: No such file or directory.
hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$



Can someone help me figure out what the issue is with my installation?


Regards
ashish

Re: New hadoop 1.2 single node installation giving problems

Posted by Yexi Jiang <ye...@gmail.com>.
Maybe the conf file is missing, there is no privilege to access it, or there
is something wrong with the format of your conf files (hdfs-site, core-site,
mapred-site). You can double-check them. It could also be a typo in the
<property></property> tags or something like that.
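
A quick way to confirm whether the XML itself is well formed (assuming the
xmllint tool is installed and the conf directory is /usr/local/hadoop/conf)
is:

xmllint --noout /usr/local/hadoop/conf/core-site.xml /usr/local/hadoop/conf/hdfs-site.xml /usr/local/hadoop/conf/mapred-site.xml

Note this only catches malformed XML; a file that is valid XML but is missing
the <property> wrapper (as turned out to be the case later in this thread)
will still pass, so the files should also be checked by eye.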


2013/7/23 Ashish Umrani <as...@gmail.com>

> Hi There,
>
> First of all, sorry if I am asking some stupid question.  Myself being new
> to the Hadoop environment , am finding it a bit difficult to figure out why
> its failing
>
> I have installed hadoop 1.2, based on instructions given in the folllowing
> link
>
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> All went well and I could do the start-all.sh and the jps command does
> show all 5 process to be present.
>
> However when I try to do
>
> hadoop fs -ls
>
> I get the following error
>
> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop
> fs -ls
> Warning: $HADOOP_HOME is deprecated.
>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> ls: Cannot access .: No such file or directory.
> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>
>
>
> Can someone help me figure out whats the issue in my installation
>
>
> Regards
> ashish
>



-- 
------
Yexi Jiang,
ECS 251,  yjian004@cs.fiu.edu
School of Computer and Information Science,
Florida International University
Homepage: http://users.cis.fiu.edu/~yjian004/

Re: New hadoop 1.2 single node installation giving problems

Posted by Yexi Jiang <ye...@gmail.com>.
It seems *hdfs-site.xml has no <property> tag.*
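
For reference, the hdfs-site.xml listed below works once the dfs.replication
entry is wrapped in a <property> element, matching the structure of the other
two files:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
      The actual number of replications can be specified when the file is created.
      The default is used if replication is not specified at create time.
    </description>
  </property>
</configuration>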


2013/7/23 Ashish Umrani <as...@gmail.com>

> Hey thanks for response.  I have changed 4 files during installation
>
> core-site.xml
> mapred-site.xml
> hdfs-site.xml   and
> hadoop-env.sh
>
>
> I could not find any issues except that all params in the hadoop-env.sh
> are commented out.  Only java_home is un commented.
>
> If you have a quick minute can you please browse through these files in
> email and let me know where could be the issue.
>
> Regards
> ashish
>
>
>
> I am listing those files below.
> *core-site.xml *
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/app/hadoop/tmp</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:54310</value>
>     <description>The name of the default file system.  A URI whose
>     scheme and authority determine the FileSystem implementation.  The
>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>     the FileSystem implementation class.  The uri's authority is used to
>     determine the host, port, etc. for a filesystem.</description>
>   </property>
> </configuration>
>
>
>
> *mapred-site.xml*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost:54311</value>
>     <description>The host and port that the MapReduce job tracker runs
>     at.  If "local", then jobs are run in-process as a single map
>     and reduce task.
>     </description>
>   </property>
> </configuration>
>
>
>
> *hdfs-site.xml   and*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <name>dfs.replication</name>
>   <value>1</value>
>   <description>Default block replication.
>     The actual number of replications can be specified when the file is
> created.
>     The default is used if replication is not specified in create time.
>   </description>
> </configuration>
>
>
>
> *hadoop-env.sh*
> # Set Hadoop-specific environment variables here.
>
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
>
>
> All pther params in hadoop-env.sh are commented
>
>
>
>
>
>
>
>
> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Hi,
>>
>> You might have missed some configuration (XML tags ), Please check all
>> the Conf files.
>>
>> Thanks
>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Hi There,
>>>
>>> First of all, sorry if I am asking some stupid question.  Myself being
>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>> why its failing
>>>
>>> I have installed hadoop 1.2, based on instructions given in the
>>> folllowing link
>>>
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>
>>> All went well and I could do the start-all.sh and the jps command does
>>> show all 5 process to be present.
>>>
>>> However when I try to do
>>>
>>> hadoop fs -ls
>>>
>>> I get the following error
>>>
>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> ls: Cannot access .: No such file or directory.
>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>
>>>
>>>
>>> Can someone help me figure out whats the issue in my installation
>>>
>>>
>>> Regards
>>> ashish
>>>
>>
>>
>


-- 
------
Yexi Jiang,
ECS 251,  yjian004@cs.fiu.edu
School of Computer and Information Science,
Florida International University
Homepage: http://users.cis.fiu.edu/~yjian004/

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Thanks Shekhar,

The problem was not in my building of the jar.  It was, in fact, in the
execution.

I was running the command

*hadoop -jar* <jar filename> <qualified class name> input output

The problem was with -jar.  It should be

*hadoop jar* <jar filename> <qualified class name> input output
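
For example (the jar name, driver class, and HDFS paths here are only
placeholders):

hadoop jar wc.jar com.example.WordCount /user/hduser/input /user/hduser/output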


Thanks for the help once again.

regards
ashish


On Tue, Jul 23, 2013 at 10:31 AM, Shekhar Sharma <sh...@gmail.com>wrote:

> hadoop jar wc.jar <fully qualified driver name> inputdata outputdestination
>
>
> Regards,
> Som Shekhar Sharma
> +91-8197243810
>
>
> On Tue, Jul 23, 2013 at 10:58 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Jitendra, Som,
>>
>> Thanks.  Issue was in not having any file there.  Its working fine now.
>>
>> I am able to do -ls and could also do -mkdir and -put.
>>
>> Now is time to run the jar and apparently I am getting
>>
>> no main manifest attribute, in wc.jar
>>
>>
>> But I believe its because of maven pom file does not have the main class
>> entry.
>>
>> Which I go ahead and change the pom file and build it again, please let
>> me know if you guys think of some other reason.
>>
>> Once again this user group rocks.  I have never seen this quick a
>> response.
>>
>> Regards
>> ashish
>>
>>
>> On Tue, Jul 23, 2013 at 10:21 AM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Try..
>>>
>>> *hadoop fs -ls /*
>>>
>>> **
>>> Thanks
>>>
>>>
>>> On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <ashish.umrani@gmail.com
>>> > wrote:
>>>
>>>> Thanks Jitendra, Bejoy and Yexi,
>>>>
>>>> I got past that.  And now the ls command says it can not access the
>>>> directory.  I am sure this is a permissions issue.  I am just wondering
>>>> which directory and I missing permissions on.
>>>>
>>>> Any pointers?
>>>>
>>>> And once again, thanks a lot
>>>>
>>>> Regards
>>>> ashish
>>>>
>>>>  *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>> hadoop fs -ls*
>>>> *Warning: $HADOOP_HOME is deprecated.*
>>>> *
>>>> *
>>>> *ls: Cannot access .: No such file or directory.*
>>>>
>>>>
>>>>
>>>> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
>>>> jeetuyadav200890@gmail.com> wrote:
>>>>
>>>>> Hi Ashish,
>>>>>
>>>>> Please check <property></property>  in hdfs-site.xml.
>>>>>
>>>>> It is missing.
>>>>>
>>>>> Thanks.
>>>>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <
>>>>> ashish.umrani@gmail.com> wrote:
>>>>>
>>>>>> Hey thanks for response.  I have changed 4 files during installation
>>>>>>
>>>>>> core-site.xml
>>>>>> mapred-site.xml
>>>>>> hdfs-site.xml   and
>>>>>> hadoop-env.sh
>>>>>>
>>>>>>
>>>>>> I could not find any issues except that all params in the
>>>>>> hadoop-env.sh are commented out.  Only java_home is un commented.
>>>>>>
>>>>>> If you have a quick minute can you please browse through these files
>>>>>> in email and let me know where could be the issue.
>>>>>>
>>>>>> Regards
>>>>>> ashish
>>>>>>
>>>>>>
>>>>>>
>>>>>> I am listing those files below.
>>>>>>  *core-site.xml *
>>>>>>  <?xml version="1.0"?>
>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>
>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>
>>>>>> <configuration>
>>>>>>   <property>
>>>>>>     <name>hadoop.tmp.dir</name>
>>>>>>     <value>/app/hadoop/tmp</value>
>>>>>>     <description>A base for other temporary directories.</description>
>>>>>>   </property>
>>>>>>
>>>>>>   <property>
>>>>>>     <name>fs.default.name</name>
>>>>>>     <value>hdfs://localhost:54310</value>
>>>>>>     <description>The name of the default file system.  A URI whose
>>>>>>     scheme and authority determine the FileSystem implementation.  The
>>>>>>     uri's scheme determines the config property (fs.SCHEME.impl)
>>>>>> naming
>>>>>>     the FileSystem implementation class.  The uri's authority is used
>>>>>> to
>>>>>>     determine the host, port, etc. for a filesystem.</description>
>>>>>>   </property>
>>>>>> </configuration>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *mapred-site.xml*
>>>>>>  <?xml version="1.0"?>
>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>
>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>
>>>>>> <configuration>
>>>>>>   <property>
>>>>>>     <name>mapred.job.tracker</name>
>>>>>>     <value>localhost:54311</value>
>>>>>>     <description>The host and port that the MapReduce job tracker runs
>>>>>>     at.  If "local", then jobs are run in-process as a single map
>>>>>>     and reduce task.
>>>>>>     </description>
>>>>>>   </property>
>>>>>> </configuration>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *hdfs-site.xml   and*
>>>>>>  <?xml version="1.0"?>
>>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>>
>>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>>
>>>>>> <configuration>
>>>>>>   <name>dfs.replication</name>
>>>>>>   <value>1</value>
>>>>>>   <description>Default block replication.
>>>>>>     The actual number of replications can be specified when the file
>>>>>> is created.
>>>>>>     The default is used if replication is not specified in create
>>>>>> time.
>>>>>>   </description>
>>>>>> </configuration>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *hadoop-env.sh*
>>>>>>  # Set Hadoop-specific environment variables here.
>>>>>>
>>>>>> # The only required environment variable is JAVA_HOME.  All others are
>>>>>> # optional.  When running a distributed configuration it is best to
>>>>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>>>>> # remote nodes.
>>>>>>
>>>>>> # The java implementation to use.  Required.
>>>>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>>>>
>>>>>> # Extra Java CLASSPATH elements.  Optional.
>>>>>> # export HADOOP_CLASSPATH=
>>>>>>
>>>>>>
>>>>>> All pther params in hadoop-env.sh are commented
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>>>>> jeetuyadav200890@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> You might have missed some configuration (XML tags ), Please check
>>>>>>> all the Conf files.
>>>>>>>
>>>>>>> Thanks
>>>>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <
>>>>>>> ashish.umrani@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi There,
>>>>>>>>
>>>>>>>> First of all, sorry if I am asking some stupid question.  Myself
>>>>>>>> being new to the Hadoop environment , am finding it a bit difficult to
>>>>>>>> figure out why its failing
>>>>>>>>
>>>>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>>>>> folllowing link
>>>>>>>>
>>>>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>>>>
>>>>>>>> All went well and I could do the start-all.sh and the jps command
>>>>>>>> does show all 5 process to be present.
>>>>>>>>
>>>>>>>> However when I try to do
>>>>>>>>
>>>>>>>> hadoop fs -ls
>>>>>>>>
>>>>>>>> I get the following error
>>>>>>>>
>>>>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>>> hadoop fs -ls
>>>>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>>>>
>>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>>> not <property>
>>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>>> not <property>
>>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>>> not <property>
>>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>>> not <property>
>>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>>> not <property>
>>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>>> not <property>
>>>>>>>> ls: Cannot access .: No such file or directory.
>>>>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Can someone help me figure out whats the issue in my installation
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> ashish
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Shekhar Sharma <sh...@gmail.com>.
hadoop jar wc.jar <fully qualified driver name> inputdata outputdestination


Regards,
Som Shekhar Sharma
+91-8197243810
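
An aside on the "no main manifest attribute" error quoted below: passing the
fully qualified driver name on the command line, as above, works even when
the jar has no Main-Class manifest entry. If the jar should instead be
runnable without naming the class, a Maven pom.xml fragment along these lines
(a sketch; the plugin version and class name are placeholders) adds the
Main-Class attribute when the jar is built:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <archive>
      <manifest>
        <mainClass>com.example.WordCount</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>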


On Tue, Jul 23, 2013 at 10:58 PM, Ashish Umrani <as...@gmail.com>wrote:

> Jitendra, Som,
>
> Thanks.  Issue was in not having any file there.  Its working fine now.
>
> I am able to do -ls and could also do -mkdir and -put.
>
> Now is time to run the jar and apparently I am getting
>
> no main manifest attribute, in wc.jar
>
>
> But I believe its because of maven pom file does not have the main class
> entry.
>
> Which I go ahead and change the pom file and build it again, please let me
> know if you guys think of some other reason.
>
> Once again this user group rocks.  I have never seen this quick a response.
>
> Regards
> ashish
>
>
> On Tue, Jul 23, 2013 at 10:21 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Try..
>>
>> *hadoop fs -ls /*
>>
>> **
>> Thanks
>>
>>
>> On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Thanks Jitendra, Bejoy and Yexi,
>>>
>>> I got past that.  And now the ls command says it can not access the
>>> directory.  I am sure this is a permissions issue.  I am just wondering
>>> which directory and I missing permissions on.
>>>
>>> Any pointers?
>>>
>>> And once again, thanks a lot
>>>
>>> Regards
>>> ashish
>>>
>>>  *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls*
>>> *Warning: $HADOOP_HOME is deprecated.*
>>> *
>>> *
>>> *ls: Cannot access .: No such file or directory.*
>>>
>>>
>>>
>>> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
>>> jeetuyadav200890@gmail.com> wrote:
>>>
>>>> Hi Ashish,
>>>>
>>>> Please check <property></property>  in hdfs-site.xml.
>>>>
>>>> It is missing.
>>>>
>>>> Thanks.
>>>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <ashish.umrani@gmail.com
>>>> > wrote:
>>>>
>>>>> Hey thanks for response.  I have changed 4 files during installation
>>>>>
>>>>> core-site.xml
>>>>> mapred-site.xml
>>>>> hdfs-site.xml   and
>>>>> hadoop-env.sh
>>>>>
>>>>>
>>>>> I could not find any issues except that all params in the
>>>>> hadoop-env.sh are commented out.  Only java_home is un commented.
>>>>>
>>>>> If you have a quick minute can you please browse through these files
>>>>> in email and let me know where could be the issue.
>>>>>
>>>>> Regards
>>>>> ashish
>>>>>
>>>>>
>>>>>
>>>>> I am listing those files below.
>>>>>  *core-site.xml *
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>hadoop.tmp.dir</name>
>>>>>     <value>/app/hadoop/tmp</value>
>>>>>     <description>A base for other temporary directories.</description>
>>>>>   </property>
>>>>>
>>>>>   <property>
>>>>>     <name>fs.default.name</name>
>>>>>     <value>hdfs://localhost:54310</value>
>>>>>     <description>The name of the default file system.  A URI whose
>>>>>     scheme and authority determine the FileSystem implementation.  The
>>>>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>>     the FileSystem implementation class.  The uri's authority is used
>>>>> to
>>>>>     determine the host, port, etc. for a filesystem.</description>
>>>>>   </property>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>> *mapred-site.xml*
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>mapred.job.tracker</name>
>>>>>     <value>localhost:54311</value>
>>>>>     <description>The host and port that the MapReduce job tracker runs
>>>>>     at.  If "local", then jobs are run in-process as a single map
>>>>>     and reduce task.
>>>>>     </description>
>>>>>   </property>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>> *hdfs-site.xml   and*
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <name>dfs.replication</name>
>>>>>   <value>1</value>
>>>>>   <description>Default block replication.
>>>>>     The actual number of replications can be specified when the file
>>>>> is created.
>>>>>     The default is used if replication is not specified in create time.
>>>>>   </description>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>> *hadoop-env.sh*
>>>>>  # Set Hadoop-specific environment variables here.
>>>>>
>>>>> # The only required environment variable is JAVA_HOME.  All others are
>>>>> # optional.  When running a distributed configuration it is best to
>>>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>>>> # remote nodes.
>>>>>
>>>>> # The java implementation to use.  Required.
>>>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>>>
>>>>> # Extra Java CLASSPATH elements.  Optional.
>>>>> # export HADOOP_CLASSPATH=
>>>>>
>>>>>
>>>>> All pther params in hadoop-env.sh are commented
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>>>> jeetuyadav200890@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> You might have missed some configuration (XML tags ), Please check
>>>>>> all the Conf files.
>>>>>>
>>>>>> Thanks
>>>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <
>>>>>> ashish.umrani@gmail.com> wrote:
>>>>>>
>>>>>>> Hi There,
>>>>>>>
>>>>>>> First of all, sorry if I am asking some stupid question.  Myself
>>>>>>> being new to the Hadoop environment , am finding it a bit difficult to
>>>>>>> figure out why its failing
>>>>>>>
>>>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>>>> folllowing link
>>>>>>>
>>>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>>>
>>>>>>> All went well and I could do the start-all.sh and the jps command
>>>>>>> does show all 5 process to be present.
>>>>>>>
>>>>>>> However when I try to do
>>>>>>>
>>>>>>> hadoop fs -ls
>>>>>>>
>>>>>>> I get the following error
>>>>>>>
>>>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>> hadoop fs -ls
>>>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>>>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> ls: Cannot access .: No such file or directory.
>>>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Can someone help me figure out whats the issue in my installation
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> ashish
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

>>>>>
>>>>> # Extra Java CLASSPATH elements.  Optional.
>>>>> # export HADOOP_CLASSPATH=
>>>>>
>>>>>
>>>>> All pther params in hadoop-env.sh are commented
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>>>> jeetuyadav200890@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> You might have missed some configuration (XML tags ), Please check
>>>>>> all the Conf files.
>>>>>>
>>>>>> Thanks
>>>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <
>>>>>> ashish.umrani@gmail.com> wrote:
>>>>>>
>>>>>>> Hi There,
>>>>>>>
>>>>>>> First of all, sorry if I am asking some stupid question.  Myself
>>>>>>> being new to the Hadoop environment , am finding it a bit difficult to
>>>>>>> figure out why its failing
>>>>>>>
>>>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>>>> folllowing link
>>>>>>>
>>>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>>>
>>>>>>> All went well and I could do the start-all.sh and the jps command
>>>>>>> does show all 5 process to be present.
>>>>>>>
>>>>>>> However when I try to do
>>>>>>>
>>>>>>> hadoop fs -ls
>>>>>>>
>>>>>>> I get the following error
>>>>>>>
>>>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>> hadoop fs -ls
>>>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>>>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> ls: Cannot access .: No such file or directory.
>>>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Can someone help me figure out whats the issue in my installation
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> ashish
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Jitendra, Som,

> Thanks.  The issue was in not having any file there.  It's working fine now.

I am able to do -ls and could also do -mkdir and -put.

> Now it is time to run the jar, and apparently I am getting

no main manifest attribute, in wc.jar


> But I believe it's because the Maven pom file does not have the main class
> entry.

> I will go ahead and change the pom file and build it again; please let me
> know if you guys think of some other reason.
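>
> (For anyone hitting the same error: a quick way to check whether the jar
> really lacks a Main-Class entry, assuming wc.jar sits in the current
> directory, is to print its manifest. And even without a Main-Class entry,
> naming the driver class on the command line, as Som suggested, works;
> com.example.WordCount below is only a stand-in for the real class, and
> input/output are placeholder HDFS paths.)
>
> unzip -p wc.jar META-INF/MANIFEST.MF                 # no "Main-Class:" line means no main manifest attribute
> hadoop jar wc.jar com.example.WordCount input output # runs without a Main-Class in the manifest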

Once again this user group rocks.  I have never seen this quick a response.

Regards
ashish


On Tue, Jul 23, 2013 at 10:21 AM, Jitendra Yadav <jeetuyadav200890@gmail.com
> wrote:

> Try..
>
> *hadoop fs -ls /*
>
> **
> Thanks
>
>
> On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Thanks Jitendra, Bejoy and Yexi,
>>
>> I got past that.  And now the ls command says it can not access the
>> directory.  I am sure this is a permissions issue.  I am just wondering
>> which directory and I missing permissions on.
>>
>> Any pointers?
>>
>> And once again, thanks a lot
>>
>> Regards
>> ashish
>>
>>  *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>> hadoop fs -ls*
>> *Warning: $HADOOP_HOME is deprecated.*
>> *
>> *
>> *ls: Cannot access .: No such file or directory.*
>>
>>
>>
>> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Hi Ashish,
>>>
>>> Please check <property></property>  in hdfs-site.xml.
>>>
>>> It is missing.
>>>
>>> Thanks.
>>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <as...@gmail.com>wrote:
>>>
>>>> Hey thanks for response.  I have changed 4 files during installation
>>>>
>>>> core-site.xml
>>>> mapred-site.xml
>>>> hdfs-site.xml   and
>>>> hadoop-env.sh
>>>>
>>>>
>>>> I could not find any issues except that all params in the hadoop-env.sh
>>>> are commented out.  Only java_home is un commented.
>>>>
>>>> If you have a quick minute can you please browse through these files in
>>>> email and let me know where could be the issue.
>>>>
>>>> Regards
>>>> ashish
>>>>
>>>>
>>>>
>>>> I am listing those files below.
>>>>  *core-site.xml *
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/app/hadoop/tmp</value>
>>>>     <description>A base for other temporary directories.</description>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>hdfs://localhost:54310</value>
>>>>     <description>The name of the default file system.  A URI whose
>>>>     scheme and authority determine the FileSystem implementation.  The
>>>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>     the FileSystem implementation class.  The uri's authority is used to
>>>>     determine the host, port, etc. for a filesystem.</description>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>> *mapred-site.xml*
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>>   <property>
>>>>     <name>mapred.job.tracker</name>
>>>>     <value>localhost:54311</value>
>>>>     <description>The host and port that the MapReduce job tracker runs
>>>>     at.  If "local", then jobs are run in-process as a single map
>>>>     and reduce task.
>>>>     </description>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>> *hdfs-site.xml   and*
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>>   <name>dfs.replication</name>
>>>>   <value>1</value>
>>>>   <description>Default block replication.
>>>>     The actual number of replications can be specified when the file is
>>>> created.
>>>>     The default is used if replication is not specified in create time.
>>>>   </description>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>> *hadoop-env.sh*
>>>>  # Set Hadoop-specific environment variables here.
>>>>
>>>> # The only required environment variable is JAVA_HOME.  All others are
>>>> # optional.  When running a distributed configuration it is best to
>>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>>> # remote nodes.
>>>>
>>>> # The java implementation to use.  Required.
>>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>>
>>>> # Extra Java CLASSPATH elements.  Optional.
>>>> # export HADOOP_CLASSPATH=
>>>>
>>>>
>>>> All pther params in hadoop-env.sh are commented
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>>> jeetuyadav200890@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> You might have missed some configuration (XML tags ), Please check all
>>>>> the Conf files.
>>>>>
>>>>> Thanks
>>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <
>>>>> ashish.umrani@gmail.com> wrote:
>>>>>
>>>>>> Hi There,
>>>>>>
>>>>>> First of all, sorry if I am asking some stupid question.  Myself
>>>>>> being new to the Hadoop environment , am finding it a bit difficult to
>>>>>> figure out why its failing
>>>>>>
>>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>>> folllowing link
>>>>>>
>>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>>
>>>>>> All went well and I could do the start-all.sh and the jps command
>>>>>> does show all 5 process to be present.
>>>>>>
>>>>>> However when I try to do
>>>>>>
>>>>>> hadoop fs -ls
>>>>>>
>>>>>> I get the following error
>>>>>>
>>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>> hadoop fs -ls
>>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> ls: Cannot access .: No such file or directory.
>>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>
>>>>>>
>>>>>>
>>>>>> Can someone help me figure out whats the issue in my installation
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> ashish
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Jitendra Yadav <je...@gmail.com>.
Try:

*hadoop fs -ls /*
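
If listing / works but a plain "hadoop fs -ls" still fails, the usual cause
is that your HDFS home directory does not exist yet (a bare -ls resolves
"." to it). Assuming the user is hduser, creating that directory should be
enough:

hadoop fs -mkdir /user/hduser     # a bare "hadoop fs -ls" lists this directory
hadoop fs -ls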

Thanks


On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <as...@gmail.com>wrote:

> Thanks Jitendra, Bejoy and Yexi,
>
> I got past that.  And now the ls command says it can not access the
> directory.  I am sure this is a permissions issue.  I am just wondering
> which directory and I missing permissions on.
>
> Any pointers?
>
> And once again, thanks a lot
>
> Regards
> ashish
>
>  *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
> hadoop fs -ls*
> *Warning: $HADOOP_HOME is deprecated.*
> *
> *
> *ls: Cannot access .: No such file or directory.*
>
>
>
> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Hi Ashish,
>>
>> Please check <property></property>  in hdfs-site.xml.
>>
>> It is missing.
>>
>> Thanks.
>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Hey thanks for response.  I have changed 4 files during installation
>>>
>>> core-site.xml
>>> mapred-site.xml
>>> hdfs-site.xml   and
>>> hadoop-env.sh
>>>
>>>
>>> I could not find any issues except that all params in the hadoop-env.sh
>>> are commented out.  Only java_home is un commented.
>>>
>>> If you have a quick minute can you please browse through these files in
>>> email and let me know where could be the issue.
>>>
>>> Regards
>>> ashish
>>>
>>>
>>>
>>> I am listing those files below.
>>>  *core-site.xml *
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/app/hadoop/tmp</value>
>>>     <description>A base for other temporary directories.</description>
>>>   </property>
>>>
>>>   <property>
>>>     <name>fs.default.name</name>
>>>     <value>hdfs://localhost:54310</value>
>>>     <description>The name of the default file system.  A URI whose
>>>     scheme and authority determine the FileSystem implementation.  The
>>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>     the FileSystem implementation class.  The uri's authority is used to
>>>     determine the host, port, etc. for a filesystem.</description>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>> *mapred-site.xml*
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <property>
>>>     <name>mapred.job.tracker</name>
>>>     <value>localhost:54311</value>
>>>     <description>The host and port that the MapReduce job tracker runs
>>>     at.  If "local", then jobs are run in-process as a single map
>>>     and reduce task.
>>>     </description>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>> *hdfs-site.xml   and*
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <name>dfs.replication</name>
>>>   <value>1</value>
>>>   <description>Default block replication.
>>>     The actual number of replications can be specified when the file is
>>> created.
>>>     The default is used if replication is not specified in create time.
>>>   </description>
>>> </configuration>
>>>
>>>
>>>
>>> *hadoop-env.sh*
>>>  # Set Hadoop-specific environment variables here.
>>>
>>> # The only required environment variable is JAVA_HOME.  All others are
>>> # optional.  When running a distributed configuration it is best to
>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>> # remote nodes.
>>>
>>> # The java implementation to use.  Required.
>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>
>>> # Extra Java CLASSPATH elements.  Optional.
>>> # export HADOOP_CLASSPATH=
>>>
>>>
>>> All pther params in hadoop-env.sh are commented
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>> jeetuyadav200890@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> You might have missed some configuration (XML tags ), Please check all
>>>> the Conf files.
>>>>
>>>> Thanks
>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <ashish.umrani@gmail.com
>>>> > wrote:
>>>>
>>>>> Hi There,
>>>>>
>>>>> First of all, sorry if I am asking some stupid question.  Myself being
>>>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>>>> why its failing
>>>>>
>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>> folllowing link
>>>>>
>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>
>>>>> All went well and I could do the start-all.sh and the jps command does
>>>>> show all 5 process to be present.
>>>>>
>>>>> However when I try to do
>>>>>
>>>>> hadoop fs -ls
>>>>>
>>>>> I get the following error
>>>>>
>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>> hadoop fs -ls
>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> ls: Cannot access .: No such file or directory.
>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>
>>>>>
>>>>>
>>>>> Can someone help me figure out whats the issue in my installation
>>>>>
>>>>>
>>>>> Regards
>>>>> ashish
>>>>>
>>>>
>>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Shekhar Sharma <sh...@gmail.com>.
After starting, I would suggest always checking whether your NameNode and
JobTracker UIs are working, and checking the number of live nodes in both
UIs.
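
On a stock 1.x install, unless you changed the defaults, those UIs are at:

http://localhost:50070/    # NameNode web UI, shows the Live Nodes count
http://localhost:50030/    # JobTracker web UI, shows the active TaskTrackers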
Regards,
Som Shekhar Sharma
+91-8197243810


On Tue, Jul 23, 2013 at 10:41 PM, Ashish Umrani <as...@gmail.com>wrote:

> Thanks,
>
> But the issue was that there was no directory and hence it was not showing
> anything.  Adding a directory cleared the warning.
>
> I appreciate your help.
>
> Regards
> ashish
>
>
> On Tue, Jul 23, 2013 at 10:08 AM, Mohammad Tariq <do...@gmail.com>wrote:
>
>> Hello Ashish,
>>
>> Change the permissions of /app/hadoop/tmp to 755 and see if it helps.
>>
>> Warm Regards,
>> Tariq
>> cloudfront.blogspot.com
>>
>>
>> On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Thanks Jitendra, Bejoy and Yexi,
>>>
>>> I got past that.  And now the ls command says it can not access the
>>> directory.  I am sure this is a permissions issue.  I am just wondering
>>> which directory and I missing permissions on.
>>>
>>> Any pointers?
>>>
>>> And once again, thanks a lot
>>>
>>> Regards
>>> ashish
>>>
>>> *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls*
>>> *Warning: $HADOOP_HOME is deprecated.*
>>> *
>>> *
>>> *ls: Cannot access .: No such file or directory.*
>>>
>>>
>>>
>>> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
>>> jeetuyadav200890@gmail.com> wrote:
>>>
>>>> Hi Ashish,
>>>>
>>>> Please check <property></property>  in hdfs-site.xml.
>>>>
>>>> It is missing.
>>>>
>>>> Thanks.
>>>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <ashish.umrani@gmail.com
>>>> > wrote:
>>>>
>>>>> Hey thanks for response.  I have changed 4 files during installation
>>>>>
>>>>> core-site.xml
>>>>> mapred-site.xml
>>>>> hdfs-site.xml   and
>>>>> hadoop-env.sh
>>>>>
>>>>>
>>>>> I could not find any issues except that all params in the
>>>>> hadoop-env.sh are commented out.  Only java_home is un commented.
>>>>>
>>>>> If you have a quick minute can you please browse through these files
>>>>> in email and let me know where could be the issue.
>>>>>
>>>>> Regards
>>>>> ashish
>>>>>
>>>>>
>>>>>
>>>>> I am listing those files below.
>>>>>  *core-site.xml *
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>hadoop.tmp.dir</name>
>>>>>     <value>/app/hadoop/tmp</value>
>>>>>     <description>A base for other temporary directories.</description>
>>>>>   </property>
>>>>>
>>>>>   <property>
>>>>>     <name>fs.default.name</name>
>>>>>     <value>hdfs://localhost:54310</value>
>>>>>     <description>The name of the default file system.  A URI whose
>>>>>     scheme and authority determine the FileSystem implementation.  The
>>>>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>>     the FileSystem implementation class.  The uri's authority is used
>>>>> to
>>>>>     determine the host, port, etc. for a filesystem.</description>
>>>>>   </property>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>> *mapred-site.xml*
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>mapred.job.tracker</name>
>>>>>     <value>localhost:54311</value>
>>>>>     <description>The host and port that the MapReduce job tracker runs
>>>>>     at.  If "local", then jobs are run in-process as a single map
>>>>>     and reduce task.
>>>>>     </description>
>>>>>   </property>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>> *hdfs-site.xml   and*
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <name>dfs.replication</name>
>>>>>   <value>1</value>
>>>>>   <description>Default block replication.
>>>>>     The actual number of replications can be specified when the file
>>>>> is created.
>>>>>     The default is used if replication is not specified in create time.
>>>>>   </description>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>> *hadoop-env.sh*
>>>>>  # Set Hadoop-specific environment variables here.
>>>>>
>>>>> # The only required environment variable is JAVA_HOME.  All others are
>>>>> # optional.  When running a distributed configuration it is best to
>>>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>>>> # remote nodes.
>>>>>
>>>>> # The java implementation to use.  Required.
>>>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>>>
>>>>> # Extra Java CLASSPATH elements.  Optional.
>>>>> # export HADOOP_CLASSPATH=
>>>>>
>>>>>
>>>>> All pther params in hadoop-env.sh are commented
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>>>> jeetuyadav200890@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> You might have missed some configuration (XML tags ), Please check
>>>>>> all the Conf files.
>>>>>>
>>>>>> Thanks
>>>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <
>>>>>> ashish.umrani@gmail.com> wrote:
>>>>>>
>>>>>>> Hi There,
>>>>>>>
>>>>>>> First of all, sorry if I am asking some stupid question.  Myself
>>>>>>> being new to the Hadoop environment , am finding it a bit difficult to
>>>>>>> figure out why its failing
>>>>>>>
>>>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>>>> folllowing link
>>>>>>>
>>>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>>>
>>>>>>> All went well and I could do the start-all.sh and the jps command
>>>>>>> does show all 5 process to be present.
>>>>>>>
>>>>>>> However when I try to do
>>>>>>>
>>>>>>> hadoop fs -ls
>>>>>>>
>>>>>>> I get the following error
>>>>>>>
>>>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>> hadoop fs -ls
>>>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>>>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element
>>>>>>> not <property>
>>>>>>> ls: Cannot access .: No such file or directory.
>>>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Can someone help me figure out whats the issue in my installation
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> ashish
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

>>>>>>> not <property>
>>>>>>> ls: Cannot access .: No such file or directory.
>>>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Can someone help me figure out whats the issue in my installation
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> ashish
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Thanks,

But the issue was that the directory did not exist in HDFS, and hence ls was not showing
anything.  Creating the directory cleared the error.

I appreciate your help.

Regards
ashish
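
For reference, in Hadoop 1.x "hadoop fs -ls" with no path argument lists the user's HDFS
home directory (/user/<username>), so with the hduser account from that tutorial the
missing directory would be /user/hduser. Something along these lines creates it (run as
hduser, assuming it can write to /user):

  hadoop fs -mkdir /user/hduser
  hadoop fs -ls /user/hduser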


On Tue, Jul 23, 2013 at 10:08 AM, Mohammad Tariq <do...@gmail.com> wrote:

> Hello Ashish,
>
> Change the permissions of /app/hadoop/tmp to 755 and see if it helps.
>
> Warm Regards,
> Tariq
> cloudfront.blogspot.com
>
>
> On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Thanks Jitendra, Bejoy and Yexi,
>>
>> I got past that.  And now the ls command says it can not access the
>> directory.  I am sure this is a permissions issue.  I am just wondering
>> which directory and I missing permissions on.
>>
>> Any pointers?
>>
>> And once again, thanks a lot
>>
>> Regards
>> ashish
>>
>> *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>> hadoop fs -ls*
>> *Warning: $HADOOP_HOME is deprecated.*
>> *
>> *
>> *ls: Cannot access .: No such file or directory.*
>>
>>
>>
>> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Hi Ashish,
>>>
>>> Please check <property></property>  in hdfs-site.xml.
>>>
>>> It is missing.
>>>
>>> Thanks.
>>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <as...@gmail.com>wrote:
>>>
>>>> Hey thanks for response.  I have changed 4 files during installation
>>>>
>>>> core-site.xml
>>>> mapred-site.xml
>>>> hdfs-site.xml   and
>>>> hadoop-env.sh
>>>>
>>>>
>>>> I could not find any issues except that all params in the hadoop-env.sh
>>>> are commented out.  Only java_home is un commented.
>>>>
>>>> If you have a quick minute can you please browse through these files in
>>>> email and let me know where could be the issue.
>>>>
>>>> Regards
>>>> ashish
>>>>
>>>>
>>>>
>>>> I am listing those files below.
>>>>  *core-site.xml *
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>>   <property>
>>>>     <name>hadoop.tmp.dir</name>
>>>>     <value>/app/hadoop/tmp</value>
>>>>     <description>A base for other temporary directories.</description>
>>>>   </property>
>>>>
>>>>   <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>hdfs://localhost:54310</value>
>>>>     <description>The name of the default file system.  A URI whose
>>>>     scheme and authority determine the FileSystem implementation.  The
>>>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>     the FileSystem implementation class.  The uri's authority is used to
>>>>     determine the host, port, etc. for a filesystem.</description>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>> *mapred-site.xml*
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>>   <property>
>>>>     <name>mapred.job.tracker</name>
>>>>     <value>localhost:54311</value>
>>>>     <description>The host and port that the MapReduce job tracker runs
>>>>     at.  If "local", then jobs are run in-process as a single map
>>>>     and reduce task.
>>>>     </description>
>>>>   </property>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>> *hdfs-site.xml   and*
>>>>  <?xml version="1.0"?>
>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>
>>>> <!-- Put site-specific property overrides in this file. -->
>>>>
>>>> <configuration>
>>>>   <name>dfs.replication</name>
>>>>   <value>1</value>
>>>>   <description>Default block replication.
>>>>     The actual number of replications can be specified when the file is
>>>> created.
>>>>     The default is used if replication is not specified in create time.
>>>>   </description>
>>>> </configuration>
>>>>
>>>>
>>>>
>>>> *hadoop-env.sh*
>>>>  # Set Hadoop-specific environment variables here.
>>>>
>>>> # The only required environment variable is JAVA_HOME.  All others are
>>>> # optional.  When running a distributed configuration it is best to
>>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>>> # remote nodes.
>>>>
>>>> # The java implementation to use.  Required.
>>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>>
>>>> # Extra Java CLASSPATH elements.  Optional.
>>>> # export HADOOP_CLASSPATH=
>>>>
>>>>
>>>> All pther params in hadoop-env.sh are commented
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>>> jeetuyadav200890@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> You might have missed some configuration (XML tags ), Please check all
>>>>> the Conf files.
>>>>>
>>>>> Thanks
>>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <
>>>>> ashish.umrani@gmail.com> wrote:
>>>>>
>>>>>> Hi There,
>>>>>>
>>>>>> First of all, sorry if I am asking some stupid question.  Myself
>>>>>> being new to the Hadoop environment , am finding it a bit difficult to
>>>>>> figure out why its failing
>>>>>>
>>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>>> folllowing link
>>>>>>
>>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>>
>>>>>> All went well and I could do the start-all.sh and the jps command
>>>>>> does show all 5 process to be present.
>>>>>>
>>>>>> However when I try to do
>>>>>>
>>>>>> hadoop fs -ls
>>>>>>
>>>>>> I get the following error
>>>>>>
>>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>> hadoop fs -ls
>>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>>> <property>
>>>>>> ls: Cannot access .: No such file or directory.
>>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>>
>>>>>>
>>>>>>
>>>>>> Can someone help me figure out whats the issue in my installation
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> ashish
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Ashish,

Change the permissions of /app/hadoop/tmp to 755 and see if it helps.

Warm Regards,
Tariq
cloudfront.blogspot.com
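
In case it helps someone searching later, on the tutorial's layout that would be
something like (sudo only needed if hduser does not own the directory; the tutorial
chowns it to hduser:hadoop):

  sudo chmod 755 /app/hadoop/tmp
  ls -ld /app/hadoop/tmp    # expect something like: drwxr-xr-x ... hduser hadoop ... /app/hadoop/tmp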


On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <as...@gmail.com>wrote:

> Thanks Jitendra, Bejoy and Yexi,
>
> I got past that.  And now the ls command says it can not access the
> directory.  I am sure this is a permissions issue.  I am just wondering
> which directory and I missing permissions on.
>
> Any pointers?
>
> And once again, thanks a lot
>
> Regards
> ashish
>
> *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop
> fs -ls*
> *Warning: $HADOOP_HOME is deprecated.*
> *
> *
> *ls: Cannot access .: No such file or directory.*
>
>
>
> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Hi Ashish,
>>
>> Please check <property></property>  in hdfs-site.xml.
>>
>> It is missing.
>>
>> Thanks.
>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Hey thanks for response.  I have changed 4 files during installation
>>>
>>> core-site.xml
>>> mapred-site.xml
>>> hdfs-site.xml   and
>>> hadoop-env.sh
>>>
>>>
>>> I could not find any issues except that all params in the hadoop-env.sh
>>> are commented out.  Only java_home is un commented.
>>>
>>> If you have a quick minute can you please browse through these files in
>>> email and let me know where could be the issue.
>>>
>>> Regards
>>> ashish
>>>
>>>
>>>
>>> I am listing those files below.
>>>  *core-site.xml *
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/app/hadoop/tmp</value>
>>>     <description>A base for other temporary directories.</description>
>>>   </property>
>>>
>>>   <property>
>>>     <name>fs.default.name</name>
>>>     <value>hdfs://localhost:54310</value>
>>>     <description>The name of the default file system.  A URI whose
>>>     scheme and authority determine the FileSystem implementation.  The
>>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>     the FileSystem implementation class.  The uri's authority is used to
>>>     determine the host, port, etc. for a filesystem.</description>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>> *mapred-site.xml*
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <property>
>>>     <name>mapred.job.tracker</name>
>>>     <value>localhost:54311</value>
>>>     <description>The host and port that the MapReduce job tracker runs
>>>     at.  If "local", then jobs are run in-process as a single map
>>>     and reduce task.
>>>     </description>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>> *hdfs-site.xml   and*
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <name>dfs.replication</name>
>>>   <value>1</value>
>>>   <description>Default block replication.
>>>     The actual number of replications can be specified when the file is
>>> created.
>>>     The default is used if replication is not specified in create time.
>>>   </description>
>>> </configuration>
>>>
>>>
>>>
>>> *hadoop-env.sh*
>>>  # Set Hadoop-specific environment variables here.
>>>
>>> # The only required environment variable is JAVA_HOME.  All others are
>>> # optional.  When running a distributed configuration it is best to
>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>> # remote nodes.
>>>
>>> # The java implementation to use.  Required.
>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>
>>> # Extra Java CLASSPATH elements.  Optional.
>>> # export HADOOP_CLASSPATH=
>>>
>>>
>>> All pther params in hadoop-env.sh are commented
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>> jeetuyadav200890@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> You might have missed some configuration (XML tags ), Please check all
>>>> the Conf files.
>>>>
>>>> Thanks
>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <ashish.umrani@gmail.com
>>>> > wrote:
>>>>
>>>>> Hi There,
>>>>>
>>>>> First of all, sorry if I am asking some stupid question.  Myself being
>>>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>>>> why its failing
>>>>>
>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>> folllowing link
>>>>>
>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>
>>>>> All went well and I could do the start-all.sh and the jps command does
>>>>> show all 5 process to be present.
>>>>>
>>>>> However when I try to do
>>>>>
>>>>> hadoop fs -ls
>>>>>
>>>>> I get the following error
>>>>>
>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>> hadoop fs -ls
>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> ls: Cannot access .: No such file or directory.
>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>
>>>>>
>>>>>
>>>>> Can someone help me figure out whats the issue in my installation
>>>>>
>>>>>
>>>>> Regards
>>>>> ashish
>>>>>
>>>>
>>>>
>>>
>>
>

>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>> # remote nodes.
>>>
>>> # The java implementation to use.  Required.
>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>
>>> # Extra Java CLASSPATH elements.  Optional.
>>> # export HADOOP_CLASSPATH=
>>>
>>>
>>> All pther params in hadoop-env.sh are commented
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>> jeetuyadav200890@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> You might have missed some configuration (XML tags ), Please check all
>>>> the Conf files.
>>>>
>>>> Thanks
>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <ashish.umrani@gmail.com
>>>> > wrote:
>>>>
>>>>> Hi There,
>>>>>
>>>>> First of all, sorry if I am asking some stupid question.  Myself being
>>>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>>>> why its failing
>>>>>
>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>> folllowing link
>>>>>
>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>
>>>>> All went well and I could do the start-all.sh and the jps command does
>>>>> show all 5 process to be present.
>>>>>
>>>>> However when I try to do
>>>>>
>>>>> hadoop fs -ls
>>>>>
>>>>> I get the following error
>>>>>
>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>> hadoop fs -ls
>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> ls: Cannot access .: No such file or directory.
>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>
>>>>>
>>>>>
>>>>> Can someone help me figure out whats the issue in my installation
>>>>>
>>>>>
>>>>> Regards
>>>>> ashish
>>>>>
>>>>
>>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Jitendra Yadav <je...@gmail.com>.
Try..

*hadoop fs -ls /*
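
That lists the HDFS root, which always exists. The bare "hadoop fs -ls" most
likely fails because it points at the current user's HDFS home directory
(/user/<username>), which is not created on a fresh install. A sketch of the
fix, assuming the hduser account from the tutorial:

hadoop fs -ls /                  # the root directory always exists
hadoop fs -mkdir /user/hduser    # create the home directory that "." resolves to
hadoop fs -ls                    # should now succeed (nothing listed until files are added)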

Thanks


On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <as...@gmail.com>wrote:

> Thanks Jitendra, Bejoy and Yexi,
>
> I got past that.  And now the ls command says it can not access the
> directory.  I am sure this is a permissions issue.  I am just wondering
> which directory and I missing permissions on.
>
> Any pointers?
>
> And once again, thanks a lot
>
> Regards
> ashish
>
>  *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
> hadoop fs -ls*
> *Warning: $HADOOP_HOME is deprecated.*
> *
> *
> *ls: Cannot access .: No such file or directory.*
>
>
>
> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Hi Ashish,
>>
>> Please check <property></property>  in hdfs-site.xml.
>>
>> It is missing.
>>
>> Thanks.
>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Hey thanks for response.  I have changed 4 files during installation
>>>
>>> core-site.xml
>>> mapred-site.xml
>>> hdfs-site.xml   and
>>> hadoop-env.sh
>>>
>>>
>>> I could not find any issues except that all params in the hadoop-env.sh
>>> are commented out.  Only java_home is un commented.
>>>
>>> If you have a quick minute can you please browse through these files in
>>> email and let me know where could be the issue.
>>>
>>> Regards
>>> ashish
>>>
>>>
>>>
>>> I am listing those files below.
>>>  *core-site.xml *
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/app/hadoop/tmp</value>
>>>     <description>A base for other temporary directories.</description>
>>>   </property>
>>>
>>>   <property>
>>>     <name>fs.default.name</name>
>>>     <value>hdfs://localhost:54310</value>
>>>     <description>The name of the default file system.  A URI whose
>>>     scheme and authority determine the FileSystem implementation.  The
>>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>     the FileSystem implementation class.  The uri's authority is used to
>>>     determine the host, port, etc. for a filesystem.</description>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>> *mapred-site.xml*
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <property>
>>>     <name>mapred.job.tracker</name>
>>>     <value>localhost:54311</value>
>>>     <description>The host and port that the MapReduce job tracker runs
>>>     at.  If "local", then jobs are run in-process as a single map
>>>     and reduce task.
>>>     </description>
>>>   </property>
>>> </configuration>
>>>
>>>
>>>
>>> *hdfs-site.xml   and*
>>>  <?xml version="1.0"?>
>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>
>>> <!-- Put site-specific property overrides in this file. -->
>>>
>>> <configuration>
>>>   <name>dfs.replication</name>
>>>   <value>1</value>
>>>   <description>Default block replication.
>>>     The actual number of replications can be specified when the file is
>>> created.
>>>     The default is used if replication is not specified in create time.
>>>   </description>
>>> </configuration>
>>>
>>>
>>>
>>> *hadoop-env.sh*
>>>  # Set Hadoop-specific environment variables here.
>>>
>>> # The only required environment variable is JAVA_HOME.  All others are
>>> # optional.  When running a distributed configuration it is best to
>>> # set JAVA_HOME in this file, so that it is correctly defined on
>>> # remote nodes.
>>>
>>> # The java implementation to use.  Required.
>>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>>
>>> # Extra Java CLASSPATH elements.  Optional.
>>> # export HADOOP_CLASSPATH=
>>>
>>>
>>> All pther params in hadoop-env.sh are commented
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>>> jeetuyadav200890@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> You might have missed some configuration (XML tags ), Please check all
>>>> the Conf files.
>>>>
>>>> Thanks
>>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <ashish.umrani@gmail.com
>>>> > wrote:
>>>>
>>>>> Hi There,
>>>>>
>>>>> First of all, sorry if I am asking some stupid question.  Myself being
>>>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>>>> why its failing
>>>>>
>>>>> I have installed hadoop 1.2, based on instructions given in the
>>>>> folllowing link
>>>>>
>>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>>
>>>>> All went well and I could do the start-all.sh and the jps command does
>>>>> show all 5 process to be present.
>>>>>
>>>>> However when I try to do
>>>>>
>>>>> hadoop fs -ls
>>>>>
>>>>> I get the following error
>>>>>
>>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>> hadoop fs -ls
>>>>> Warning: $HADOOP_HOME is deprecated.
>>>>>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>>> <property>
>>>>> ls: Cannot access .: No such file or directory.
>>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>>
>>>>>
>>>>>
>>>>> Can someone help me figure out whats the issue in my installation
>>>>>
>>>>>
>>>>> Regards
>>>>> ashish
>>>>>
>>>>
>>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Thanks Jitendra, Bejoy and Yexi,

I got past that.  And now the ls command says it cannot access the
directory.  I am sure this is a permissions issue.  I am just wondering
which directory I am missing permissions on.

Any pointers?

And once again, thanks a lot

Regards
ashish

*hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop
fs -ls*
*Warning: $HADOOP_HOME is deprecated.*
*ls: Cannot access .: No such file or directory.*
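
The two places a permissions problem could show up are the local hadoop.tmp.dir
and the HDFS side (paths taken from the configs quoted below):

ls -ld /app/hadoop/tmp     # local hadoop.tmp.dir: owned and readable by hduser?
hadoop fs -ls /user        # HDFS: does a home directory /user/hduser exist yet?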



On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Hi Ashish,
>
> Please check <property></property>  in hdfs-site.xml.
>
> It is missing.
>
> Thanks.
> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Hey thanks for response.  I have changed 4 files during installation
>>
>> core-site.xml
>> mapred-site.xml
>> hdfs-site.xml   and
>> hadoop-env.sh
>>
>>
>> I could not find any issues except that all params in the hadoop-env.sh
>> are commented out.  Only java_home is un commented.
>>
>> If you have a quick minute can you please browse through these files in
>> email and let me know where could be the issue.
>>
>> Regards
>> ashish
>>
>>
>>
>> I am listing those files below.
>>  *core-site.xml *
>>  <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/app/hadoop/tmp</value>
>>     <description>A base for other temporary directories.</description>
>>   </property>
>>
>>   <property>
>>     <name>fs.default.name</name>
>>     <value>hdfs://localhost:54310</value>
>>     <description>The name of the default file system.  A URI whose
>>     scheme and authority determine the FileSystem implementation.  The
>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>     the FileSystem implementation class.  The uri's authority is used to
>>     determine the host, port, etc. for a filesystem.</description>
>>   </property>
>> </configuration>
>>
>>
>>
>> *mapred-site.xml*
>>  <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>>   <property>
>>     <name>mapred.job.tracker</name>
>>     <value>localhost:54311</value>
>>     <description>The host and port that the MapReduce job tracker runs
>>     at.  If "local", then jobs are run in-process as a single map
>>     and reduce task.
>>     </description>
>>   </property>
>> </configuration>
>>
>>
>>
>> *hdfs-site.xml   and*
>>  <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>>   <name>dfs.replication</name>
>>   <value>1</value>
>>   <description>Default block replication.
>>     The actual number of replications can be specified when the file is
>> created.
>>     The default is used if replication is not specified in create time.
>>   </description>
>> </configuration>
>>
>>
>>
>> *hadoop-env.sh*
>>  # Set Hadoop-specific environment variables here.
>>
>> # The only required environment variable is JAVA_HOME.  All others are
>> # optional.  When running a distributed configuration it is best to
>> # set JAVA_HOME in this file, so that it is correctly defined on
>> # remote nodes.
>>
>> # The java implementation to use.  Required.
>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>
>> # Extra Java CLASSPATH elements.  Optional.
>> # export HADOOP_CLASSPATH=
>>
>>
>> All pther params in hadoop-env.sh are commented
>>
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> You might have missed some configuration (XML tags ), Please check all
>>> the Conf files.
>>>
>>> Thanks
>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>>>
>>>> Hi There,
>>>>
>>>> First of all, sorry if I am asking some stupid question.  Myself being
>>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>>> why its failing
>>>>
>>>> I have installed hadoop 1.2, based on instructions given in the
>>>> folllowing link
>>>>
>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>
>>>> All went well and I could do the start-all.sh and the jps command does
>>>> show all 5 process to be present.
>>>>
>>>> However when I try to do
>>>>
>>>> hadoop fs -ls
>>>>
>>>> I get the following error
>>>>
>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>> hadoop fs -ls
>>>> Warning: $HADOOP_HOME is deprecated.
>>>>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> ls: Cannot access .: No such file or directory.
>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>
>>>>
>>>>
>>>> Can someone help me figure out whats the issue in my installation
>>>>
>>>>
>>>> Regards
>>>> ashish
>>>>
>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Jitendra Yadav <je...@gmail.com>.
Hi Ashish,

Please check your hdfs-site.xml: the <property></property> tags around the dfs.replication entry are missing.
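
For reference, a corrected hdfs-site.xml for your single-node setup (keeping the replication value of 1 you already use) would look like this:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
</configuration>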

Thanks.
On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <as...@gmail.com>wrote:

> Hey thanks for response.  I have changed 4 files during installation
>
> core-site.xml
> mapred-site.xml
> hdfs-site.xml   and
> hadoop-env.sh
>
>
> I could not find any issues except that all params in the hadoop-env.sh
> are commented out.  Only java_home is un commented.
>
> If you have a quick minute can you please browse through these files in
> email and let me know where could be the issue.
>
> Regards
> ashish
>
>
>
> I am listing those files below.
>  *core-site.xml *
>  <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/app/hadoop/tmp</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:54310</value>
>     <description>The name of the default file system.  A URI whose
>     scheme and authority determine the FileSystem implementation.  The
>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>     the FileSystem implementation class.  The uri's authority is used to
>     determine the host, port, etc. for a filesystem.</description>
>   </property>
> </configuration>
>
>
>
> *mapred-site.xml*
>  <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost:54311</value>
>     <description>The host and port that the MapReduce job tracker runs
>     at.  If "local", then jobs are run in-process as a single map
>     and reduce task.
>     </description>
>   </property>
> </configuration>
>
>
>
> *hdfs-site.xml   and*
>  <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <name>dfs.replication</name>
>   <value>1</value>
>   <description>Default block replication.
>     The actual number of replications can be specified when the file is
> created.
>     The default is used if replication is not specified in create time.
>   </description>
> </configuration>
>
>
>
> *hadoop-env.sh*
>  # Set Hadoop-specific environment variables here.
>
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
>
>
> All pther params in hadoop-env.sh are commented
>
>
>
>
>
>
>
>
> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Hi,
>>
>> You might have missed some configuration (XML tags ), Please check all
>> the Conf files.
>>
>> Thanks
>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Hi There,
>>>
>>> First of all, sorry if I am asking some stupid question.  Myself being
>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>> why its failing
>>>
>>> I have installed hadoop 1.2, based on instructions given in the
>>> folllowing link
>>>
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>
>>> All went well and I could do the start-all.sh and the jps command does
>>> show all 5 process to be present.
>>>
>>> However when I try to do
>>>
>>> hadoop fs -ls
>>>
>>> I get the following error
>>>
>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> ls: Cannot access .: No such file or directory.
>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>
>>>
>>>
>>> Can someone help me figure out whats the issue in my installation
>>>
>>>
>>> Regards
>>> ashish
>>>
>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Hey,

Thanks Shekhar.  That worked like a charm.  Appreciate the help from you all.
Now I will try to put files and run the word count or a similar program.
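
The rough plan, based on the usual Hadoop 1.x examples jar (the jar name and the sample input file are assumptions and may differ on a given install), is:

hadoop fs -mkdir input
hadoop fs -put /usr/local/hadoop/README.txt input
hadoop jar /usr/local/hadoop/hadoop-examples-1.2.1.jar wordcount input output
hadoop fs -cat output/part*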

Regards
ashish


On Tue, Jul 23, 2013 at 10:07 AM, Shekhar Sharma <sh...@gmail.com>wrote:

> Its warning not error...
>
> Create a directory and then do ls ( In your case /user/hduser is not
> created untill and unless for the first time you create a directory or put
> some file)
>
> hadoop fs  -mkdir sample
>
> hadoop fs  -ls
>
> I would suggest if you are getting pemission problem,
> please check the following:
>
> (1) Have you run the command "hadoop namenode -format" with different user
> and you are accessing the hdfs with different user?
>
> On Tue, Jul 23, 2013 at 10:10 PM, <be...@gmail.com> wrote:
>
>> **
>> Hi Ashish
>>
>> In your hdfs-site.xml within <configuration> tag you need to have the
>> <property> tag and inside a <property> tag you can have <name>,<value> and
>> <description> tags.
>>
>> Regards
>> Bejoy KS
>>
>> Sent from remote device, Please excuse typos
>> ------------------------------
>> *From: * Ashish Umrani <as...@gmail.com>
>> *Date: *Tue, 23 Jul 2013 09:28:00 -0700
>> *To: *<us...@hadoop.apache.org>
>> *ReplyTo: * user@hadoop.apache.org
>> *Subject: *Re: New hadoop 1.2 single node installation giving problems
>>
>> Hey thanks for response.  I have changed 4 files during installation
>>
>> core-site.xml
>> mapred-site.xml
>> hdfs-site.xml   and
>> hadoop-env.sh
>>
>>
>> I could not find any issues except that all params in the hadoop-env.sh
>> are commented out.  Only java_home is un commented.
>>
>> If you have a quick minute can you please browse through these files in
>> email and let me know where could be the issue.
>>
>> Regards
>> ashish
>>
>>
>>
>> I am listing those files below.
>> *core-site.xml *
>> <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/app/hadoop/tmp</value>
>>     <description>A base for other temporary directories.</description>
>>   </property>
>>
>>   <property>
>>     <name>fs.default.name</name>
>>     <value>hdfs://localhost:54310</value>
>>     <description>The name of the default file system.  A URI whose
>>     scheme and authority determine the FileSystem implementation.  The
>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>     the FileSystem implementation class.  The uri's authority is used to
>>     determine the host, port, etc. for a filesystem.</description>
>>   </property>
>> </configuration>
>>
>>
>>
>> *mapred-site.xml*
>> <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>>   <property>
>>     <name>mapred.job.tracker</name>
>>     <value>localhost:54311</value>
>>     <description>The host and port that the MapReduce job tracker runs
>>     at.  If "local", then jobs are run in-process as a single map
>>     and reduce task.
>>     </description>
>>   </property>
>> </configuration>
>>
>>
>>
>> *hdfs-site.xml   and*
>> <?xml version="1.0"?>
>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>
>> <!-- Put site-specific property overrides in this file. -->
>>
>> <configuration>
>>   <name>dfs.replication</name>
>>   <value>1</value>
>>   <description>Default block replication.
>>     The actual number of replications can be specified when the file is
>> created.
>>     The default is used if replication is not specified in create time.
>>   </description>
>> </configuration>
>>
>>
>>
>> *hadoop-env.sh*
>> # Set Hadoop-specific environment variables here.
>>
>> # The only required environment variable is JAVA_HOME.  All others are
>> # optional.  When running a distributed configuration it is best to
>> # set JAVA_HOME in this file, so that it is correctly defined on
>> # remote nodes.
>>
>> # The java implementation to use.  Required.
>> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>>
>> # Extra Java CLASSPATH elements.  Optional.
>> # export HADOOP_CLASSPATH=
>>
>>
>> All pther params in hadoop-env.sh are commented
>>
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
>> jeetuyadav200890@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> You might have missed some configuration (XML tags ), Please check all
>>> the Conf files.
>>>
>>> Thanks
>>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>>>
>>>> Hi There,
>>>>
>>>> First of all, sorry if I am asking some stupid question.  Myself being
>>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>>> why its failing
>>>>
>>>> I have installed hadoop 1.2, based on instructions given in the
>>>> folllowing link
>>>>
>>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>>
>>>> All went well and I could do the start-all.sh and the jps command does
>>>> show all 5 process to be present.
>>>>
>>>> However when I try to do
>>>>
>>>> hadoop fs -ls
>>>>
>>>> I get the following error
>>>>
>>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>> hadoop fs -ls
>>>> Warning: $HADOOP_HOME is deprecated.
>>>>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>>> <property>
>>>> ls: Cannot access .: No such file or directory.
>>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>>
>>>>
>>>>
>>>> Can someone help me figure out whats the issue in my installation
>>>>
>>>>
>>>> Regards
>>>> ashish
>>>>
>>>
>>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Shekhar Sharma <sh...@gmail.com>.
It's a warning, not an error...

Create a directory and then do an ls (in your case /user/hduser is not
created until the first time you create a directory or put some file there):

hadoop fs  -mkdir sample

hadoop fs  -ls

If you are getting a permission problem, please check the following:

(1) Have you run the command "hadoop namenode -format" as one user while
you are accessing HDFS as a different user?
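
For example (the listing below is only an illustration of what a fresh single-node install typically prints, not output captured from your machine):

hduser@ashish-HP-Pavilion-dv6-Notebook-PC:~$ hadoop fs -mkdir sample
hduser@ashish-HP-Pavilion-dv6-Notebook-PC:~$ hadoop fs -ls
Found 1 items
drwxr-xr-x   - hduser supergroup          0 2013-07-23 10:20 /user/hduser/sample

If the owner column there is not hduser, the namenode was probably formatted (or the files were written) as another user. In that case either re-run "hadoop namenode -format" as hduser (note this wipes HDFS), or, as the user that started the namenode, fix ownership with something like:

hadoop fs -chown -R hduser:supergroup /user/hduser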

On Tue, Jul 23, 2013 at 10:10 PM, <be...@gmail.com> wrote:

> **
> Hi Ashish
>
> In your hdfs-site.xml within <configuration> tag you need to have the
> <property> tag and inside a <property> tag you can have <name>,<value> and
> <description> tags.
>
> Regards
> Bejoy KS
>
> Sent from remote device, Please excuse typos
> ------------------------------
> *From: * Ashish Umrani <as...@gmail.com>
> *Date: *Tue, 23 Jul 2013 09:28:00 -0700
> *To: *<us...@hadoop.apache.org>
> *ReplyTo: * user@hadoop.apache.org
> *Subject: *Re: New hadoop 1.2 single node installation giving problems
>
> Hey thanks for response.  I have changed 4 files during installation
>
> core-site.xml
> mapred-site.xml
> hdfs-site.xml   and
> hadoop-env.sh
>
>
> I could not find any issues except that all params in the hadoop-env.sh
> are commented out.  Only java_home is un commented.
>
> If you have a quick minute can you please browse through these files in
> email and let me know where could be the issue.
>
> Regards
> ashish
>
>
>
> I am listing those files below.
> *core-site.xml *
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/app/hadoop/tmp</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:54310</value>
>     <description>The name of the default file system.  A URI whose
>     scheme and authority determine the FileSystem implementation.  The
>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>     the FileSystem implementation class.  The uri's authority is used to
>     determine the host, port, etc. for a filesystem.</description>
>   </property>
> </configuration>
>
>
>
> *mapred-site.xml*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost:54311</value>
>     <description>The host and port that the MapReduce job tracker runs
>     at.  If "local", then jobs are run in-process as a single map
>     and reduce task.
>     </description>
>   </property>
> </configuration>
>
>
>
> *hdfs-site.xml   and*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <name>dfs.replication</name>
>   <value>1</value>
>   <description>Default block replication.
>     The actual number of replications can be specified when the file is
> created.
>     The default is used if replication is not specified in create time.
>   </description>
> </configuration>
>
>
>
> *hadoop-env.sh*
> # Set Hadoop-specific environment variables here.
>
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
>
>
> All pther params in hadoop-env.sh are commented
>
>
>
>
>
>
>
>
> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Hi,
>>
>> You might have missed some configuration (XML tags ), Please check all
>> the Conf files.
>>
>> Thanks
>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Hi There,
>>>
>>> First of all, sorry if I am asking some stupid question.  Myself being
>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>> why its failing
>>>
>>> I have installed hadoop 1.2, based on instructions given in the
>>> folllowing link
>>>
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>
>>> All went well and I could do the start-all.sh and the jps command does
>>> show all 5 process to be present.
>>>
>>> However when I try to do
>>>
>>> hadoop fs -ls
>>>
>>> I get the following error
>>>
>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> ls: Cannot access .: No such file or directory.
>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>
>>>
>>>
>>> Can someone help me figure out whats the issue in my installation
>>>
>>>
>>> Regards
>>> ashish
>>>
>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by be...@gmail.com.
Hi Ashish

In your hdfs-site.xml, within the <configuration> tag you need to have a <property> tag, and inside each <property> tag you can have the <name>, <value> and <description> tags.
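
That is, each setting should follow this general shape (a bare skeleton with a placeholder name and value):

<configuration>
  <property>
    <name>some.property.name</name>
    <value>some value</value>
    <description>Optional description of the property.</description>
  </property>
</configuration>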


Regards 
Bejoy KS

Sent from remote device, Please excuse typos

-----Original Message-----
From: Ashish Umrani <as...@gmail.com>
Date: Tue, 23 Jul 2013 09:28:00 
To: <us...@hadoop.apache.org>
Reply-To: user@hadoop.apache.org
Subject: Re: New hadoop 1.2 single node installation giving problems

Hey thanks for response.  I have changed 4 files during installation

core-site.xml
mapred-site.xml
hdfs-site.xml   and
hadoop-env.sh


I could not find any issues except that all params in the hadoop-env.sh are
commented out.  Only java_home is un commented.

If you have a quick minute can you please browse through these files in
email and let me know where could be the issue.

Regards
ashish



I am listing those files below.
*core-site.xml *
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>



*mapred-site.xml*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>



*hdfs-site.xml   and*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
    The actual number of replications can be specified when the file is
created.
    The default is used if replication is not specified in create time.
  </description>
</configuration>



*hadoop-env.sh*
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=


All pther params in hadoop-env.sh are commented








On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Hi,
>
> You might have missed some configuration (XML tags ), Please check all the
> Conf files.
>
> Thanks
> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Hi There,
>>
>> First of all, sorry if I am asking some stupid question.  Myself being
>> new to the Hadoop environment , am finding it a bit difficult to figure out
>> why its failing
>>
>> I have installed hadoop 1.2, based on instructions given in the
>> folllowing link
>>
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>
>> All went well and I could do the start-all.sh and the jps command does
>> show all 5 process to be present.
>>
>> However when I try to do
>>
>> hadoop fs -ls
>>
>> I get the following error
>>
>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>> hadoop fs -ls
>> Warning: $HADOOP_HOME is deprecated.
>>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> ls: Cannot access .: No such file or directory.
>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>
>>
>>
>> Can someone help me figure out whats the issue in my installation
>>
>>
>> Regards
>> ashish
>>
>
>


Re: New hadoop 1.2 single node installation giving problems

Posted by Yexi Jiang <ye...@gmail.com>.
Seems *hdfs-site.xml* has no <property> tag.
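
A quick way to see which conf file is affected (assuming the conf directory shown in your prompt, /usr/local/hadoop/conf) is to count the <property> elements in each site file:

grep -c "<property>" /usr/local/hadoop/conf/*-site.xml

Any file that reports a count of 0 is the one the Configuration warnings are about; from the listing you sent, that is hdfs-site.xml.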


2013/7/23 Ashish Umrani <as...@gmail.com>

> Hey thanks for response.  I have changed 4 files during installation
>
> core-site.xml
> mapred-site.xml
> hdfs-site.xml   and
> hadoop-env.sh
>
>
> I could not find any issues except that all params in the hadoop-env.sh
> are commented out.  Only java_home is un commented.
>
> If you have a quick minute can you please browse through these files in
> email and let me know where could be the issue.
>
> Regards
> ashish
>
>
>
> I am listing those files below.
> *core-site.xml *
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/app/hadoop/tmp</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:54310</value>
>     <description>The name of the default file system.  A URI whose
>     scheme and authority determine the FileSystem implementation.  The
>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>     the FileSystem implementation class.  The uri's authority is used to
>     determine the host, port, etc. for a filesystem.</description>
>   </property>
> </configuration>
>
>
>
> *mapred-site.xml*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost:54311</value>
>     <description>The host and port that the MapReduce job tracker runs
>     at.  If "local", then jobs are run in-process as a single map
>     and reduce task.
>     </description>
>   </property>
> </configuration>
>
>
>
> *hdfs-site.xml   and*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <name>dfs.replication</name>
>   <value>1</value>
>   <description>Default block replication.
>     The actual number of replications can be specified when the file is
> created.
>     The default is used if replication is not specified in create time.
>   </description>
> </configuration>
>
>
>
> *hadoop-env.sh*
> # Set Hadoop-specific environment variables here.
>
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
>
>
> All pther params in hadoop-env.sh are commented
>
>
>
>
>
>
>
>
> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Hi,
>>
>> You might have missed some configuration (XML tags ), Please check all
>> the Conf files.
>>
>> Thanks
>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Hi There,
>>>
>>> First of all, sorry if I am asking some stupid question.  Myself being
>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>> why its failing
>>>
>>> I have installed hadoop 1.2, based on instructions given in the
>>> folllowing link
>>>
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>
>>> All went well and I could do the start-all.sh and the jps command does
>>> show all 5 process to be present.
>>>
>>> However when I try to do
>>>
>>> hadoop fs -ls
>>>
>>> I get the following error
>>>
>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> ls: Cannot access .: No such file or directory.
>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>
>>>
>>>
>>> Can someone help me figure out whats the issue in my installation
>>>
>>>
>>> Regards
>>> ashish
>>>
>>
>>
>


-- 
------
Yexi Jiang,
ECS 251,  yjian004@cs.fiu.edu
School of Computer and Information Science,
Florida International University
Homepage: http://users.cis.fiu.edu/~yjian004/

Re: New hadoop 1.2 single node installation giving problems

Posted by Jitendra Yadav <je...@gmail.com>.
Hi Ashish,

Please check the <property></property> tags in hdfs-site.xml; they are missing.
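
Once the tag is in place, something along these lines should confirm the fix (a rough sketch; the paths assume the /usr/local/hadoop layout from your prompt):

cd /usr/local/hadoop
bin/stop-all.sh
bin/start-all.sh
bin/hadoop fs -ls /

The "bad conf file" warnings should disappear. A plain "hadoop fs -ls" may still report "Cannot access ." until your HDFS home directory exists; if so, "bin/hadoop fs -mkdir /user/hduser" (hduser taken from your prompt) should create it.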

Thanks.
On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <as...@gmail.com>wrote:

> Hey thanks for response.  I have changed 4 files during installation
>
> core-site.xml
> mapred-site.xml
> hdfs-site.xml   and
> hadoop-env.sh
>
>
> I could not find any issues except that all params in the hadoop-env.sh
> are commented out.  Only java_home is un commented.
>
> If you have a quick minute can you please browse through these files in
> email and let me know where could be the issue.
>
> Regards
> ashish
>
>
>
> I am listing those files below.
>  *core-site.xml *
>  <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/app/hadoop/tmp</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:54310</value>
>     <description>The name of the default file system.  A URI whose
>     scheme and authority determine the FileSystem implementation.  The
>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>     the FileSystem implementation class.  The uri's authority is used to
>     determine the host, port, etc. for a filesystem.</description>
>   </property>
> </configuration>
>
>
>
> *mapred-site.xml*
>  <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost:54311</value>
>     <description>The host and port that the MapReduce job tracker runs
>     at.  If "local", then jobs are run in-process as a single map
>     and reduce task.
>     </description>
>   </property>
> </configuration>
>
>
>
> *hdfs-site.xml   and*
>  <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <name>dfs.replication</name>
>   <value>1</value>
>   <description>Default block replication.
>     The actual number of replications can be specified when the file is
> created.
>     The default is used if replication is not specified in create time.
>   </description>
> </configuration>
>
>
>
> *hadoop-env.sh*
>  # Set Hadoop-specific environment variables here.
>
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
>
>
> All pther params in hadoop-env.sh are commented
>
>
>
>
>
>
>
>
> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
> jeetuyadav200890@gmail.com> wrote:
>
>> Hi,
>>
>> You might have missed some configuration (XML tags ), Please check all
>> the Conf files.
>>
>> Thanks
>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>>
>>> Hi There,
>>>
>>> First of all, sorry if I am asking some stupid question.  Myself being
>>> new to the Hadoop environment , am finding it a bit difficult to figure out
>>> why its failing
>>>
>>> I have installed hadoop 1.2, based on instructions given in the
>>> folllowing link
>>>
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>
>>> All went well and I could do the start-all.sh and the jps command does
>>> show all 5 process to be present.
>>>
>>> However when I try to do
>>>
>>> hadoop fs -ls
>>>
>>> I get the following error
>>>
>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> ls: Cannot access .: No such file or directory.
>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>
>>>
>>>
>>> Can someone help me figure out whats the issue in my installation
>>>
>>>
>>> Regards
>>> ashish
>>>
>>
>>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Hey, thanks for the response.  I have changed 4 files during installation:

core-site.xml
mapred-site.xml
hdfs-site.xml   and
hadoop-env.sh


I could not find any issues except that all params in hadoop-env.sh are
commented out.  Only JAVA_HOME is uncommented.

If you have a quick minute, can you please browse through these files in
the email and let me know where the issue could be.

Regards
ashish



I am listing those files below.
*core-site.xml *
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>



*mapred-site.xml*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>



*hdfs-site.xml*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
    The actual number of replications can be specified when the file is
created.
    The default is used if replication is not specified in create time.
  </description>
</configuration>



*hadoop-env.sh*
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=


All other params in hadoop-env.sh are commented out








On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Hi,
>
> You might have missed some configuration (XML tags ), Please check all the
> Conf files.
>
> Thanks
> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Hi There,
>>
>> First of all, sorry if I am asking some stupid question.  Myself being
>> new to the Hadoop environment , am finding it a bit difficult to figure out
>> why its failing
>>
>> I have installed hadoop 1.2, based on instructions given in the
>> folllowing link
>>
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>
>> All went well and I could do the start-all.sh and the jps command does
>> show all 5 process to be present.
>>
>> However when I try to do
>>
>> hadoop fs -ls
>>
>> I get the following error
>>
>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>> hadoop fs -ls
>> Warning: $HADOOP_HOME is deprecated.
>>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> ls: Cannot access .: No such file or directory.
>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>
>>
>>
>> Can someone help me figure out whats the issue in my installation
>>
>>
>> Regards
>> ashish
>>
>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Hey thanks for response.  I have changed 4 files during installation

core-site.xml
mapred-site.xml
hdfs-site.xml   and
hadoop-env.sh


I could not find any issues except that all params in the hadoop-env.sh are
commented out.  Only java_home is un commented.

If you have a quick minute can you please browse through these files in
email and let me know where could be the issue.

Regards
ashish



I am listing those files below.
*core-site.xml *
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>



*mapred-site.xml*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>



*hdfs-site.xml   and*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
    The actual number of replications can be specified when the file is
created.
    The default is used if replication is not specified in create time.
  </description>
</configuration>



*hadoop-env.sh*
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=


All pther params in hadoop-env.sh are commented








On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Hi,
>
> You might have missed some configuration (XML tags ), Please check all the
> Conf files.
>
> Thanks
> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Hi There,
>>
>> First of all, sorry if I am asking some stupid question.  Myself being
>> new to the Hadoop environment , am finding it a bit difficult to figure out
>> why its failing
>>
>> I have installed hadoop 1.2, based on instructions given in the
>> folllowing link
>>
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>
>> All went well and I could do the start-all.sh and the jps command does
>> show all 5 process to be present.
>>
>> However when I try to do
>>
>> hadoop fs -ls
>>
>> I get the following error
>>
>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>> hadoop fs -ls
>> Warning: $HADOOP_HOME is deprecated.
>>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> ls: Cannot access .: No such file or directory.
>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>
>>
>>
>> Can someone help me figure out whats the issue in my installation
>>
>>
>> Regards
>> ashish
>>
>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Hey thanks for response.  I have changed 4 files during installation

core-site.xml
mapred-site.xml
hdfs-site.xml   and
hadoop-env.sh


I could not find any issues except that all params in the hadoop-env.sh are
commented out.  Only java_home is un commented.

If you have a quick minute can you please browse through these files in
email and let me know where could be the issue.

Regards
ashish



I am listing those files below.
*core-site.xml *
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>



*mapred-site.xml*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>



*hdfs-site.xml   and*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
    The actual number of replications can be specified when the file is
created.
    The default is used if replication is not specified in create time.
  </description>
</configuration>



*hadoop-env.sh*
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=


All pther params in hadoop-env.sh are commented








On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Hi,
>
> You might have missed some configuration (XML tags ), Please check all the
> Conf files.
>
> Thanks
> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Hi There,
>>
>> First of all, sorry if I am asking some stupid question.  Myself being
>> new to the Hadoop environment , am finding it a bit difficult to figure out
>> why its failing
>>
>> I have installed hadoop 1.2, based on instructions given in the
>> folllowing link
>>
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>
>> All went well and I could do the start-all.sh and the jps command does
>> show all 5 process to be present.
>>
>> However when I try to do
>>
>> hadoop fs -ls
>>
>> I get the following error
>>
>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>> hadoop fs -ls
>> Warning: $HADOOP_HOME is deprecated.
>>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> ls: Cannot access .: No such file or directory.
>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>
>>
>>
>> Can someone help me figure out whats the issue in my installation
>>
>>
>> Regards
>> ashish
>>
>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Ashish Umrani <as...@gmail.com>.
Hey thanks for response.  I have changed 4 files during installation

core-site.xml
mapred-site.xml
hdfs-site.xml   and
hadoop-env.sh


I could not find any issues except that all params in the hadoop-env.sh are
commented out.  Only java_home is un commented.

If you have a quick minute can you please browse through these files in
email and let me know where could be the issue.

Regards
ashish



I am listing those files below.
*core-site.xml *
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.  A URI whose
    scheme and authority determine the FileSystem implementation.  The
    uri's scheme determines the config property (fs.SCHEME.impl) naming
    the FileSystem implementation class.  The uri's authority is used to
    determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>



*mapred-site.xml*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.
    </description>
  </property>
</configuration>



*hdfs-site.xml*
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
    The actual number of replications can be specified when the file is
created.
    The default is used if replication is not specified in create time.
  </description>
</configuration>
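
For reference, in core-site.xml and mapred-site.xml above each setting sits
inside a <property> element, and that wrapper is exactly what the "bad conf
file: element not <property>" warning refers to.  Assuming dfs.replication is
the only setting intended here, hdfs-site.xml with the same wrapper would
presumably read:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is
    created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
</configuration>

The Hadoop 1.x daemons only read these files at startup, so a stop-all.sh
followed by start-all.sh after editing is a good idea.  Note also that even
with clean conf files, a bare "hadoop fs -ls" lists the HDFS home directory
/user/hduser, which does not exist on a fresh install; creating it first
("hadoop fs -mkdir /user/hduser") or listing an explicit path ("hadoop fs -ls /")
avoids the "Cannot access .: No such file or directory." message.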



*hadoop-env.sh*
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=


All other params in hadoop-env.sh are commented out








On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav
<je...@gmail.com>wrote:

> Hi,
>
> You might have missed some configuration (XML tags ), Please check all the
> Conf files.
>
> Thanks
> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:
>
>> Hi There,
>>
>> First of all, sorry if I am asking some stupid question.  Myself being
>> new to the Hadoop environment , am finding it a bit difficult to figure out
>> why its failing
>>
>> I have installed hadoop 1.2, based on instructions given in the
>> folllowing link
>>
>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>
>> All went well and I could do the start-all.sh and the jps command does
>> show all 5 process to be present.
>>
>> However when I try to do
>>
>> hadoop fs -ls
>>
>> I get the following error
>>
>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>> hadoop fs -ls
>> Warning: $HADOOP_HOME is deprecated.
>>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>> <property>
>> ls: Cannot access .: No such file or directory.
>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>
>>
>>
>> Can someone help me figure out whats the issue in my installation
>>
>>
>> Regards
>> ashish
>>
>
>

Re: New hadoop 1.2 single node installation giving problems

Posted by Jitendra Yadav <je...@gmail.com>.
Hi,

You might have missed some configuration (XML tags); please check all the
conf files.
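
For reference, each of the site files (core-site.xml, mapred-site.xml,
hdfs-site.xml) is expected to follow the same skeleton, with every setting
wrapped in its own <property> element inside <configuration>.  A minimal
sketch (the property name and value below are just placeholders) is:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>some.property.name</name>
    <value>some-value</value>
  </property>
  <!-- more <property> blocks as needed -->
</configuration>

Running "xmllint --noout <file>" on each conf file (assuming libxml2-utils is
installed) catches outright XML syntax errors, although it will not complain
about a setting that is merely missing its <property> wrapper, since that is
still well-formed XML.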

Thanks
On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani <as...@gmail.com>wrote:

> Hi There,
>
> First of all, sorry if I am asking some stupid question.  Myself being new
> to the Hadoop environment , am finding it a bit difficult to figure out why
> its failing
>
> I have installed hadoop 1.2, based on instructions given in the folllowing
> link
>
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> All went well and I could do the start-all.sh and the jps command does
> show all 5 process to be present.
>
> However when I try to do
>
> hadoop fs -ls
>
> I get the following error
>
>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$ hadoop
> fs -ls
> Warning: $HADOOP_HOME is deprecated.
>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
> <property>
> ls: Cannot access .: No such file or directory.
> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>
>
>
> Can someone help me figure out whats the issue in my installation
>
>
> Regards
> ashish
>
