Posted to hdfs-dev@hadoop.apache.org by "ajames (JIRA)" <ji...@apache.org> on 2012/10/25 03:50:12 UTC

[jira] [Created] (HDFS-4109) ?Formatting HDFS running into errors :( - Many thanks

ajames created HDFS-4109:
----------------------------

             Summary: ?Formatting HDFS running into errors :( - Many thanks 
                 Key: HDFS-4109
                 URL: https://issues.apache.org/jira/browse/HDFS-4109
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs client
    Affects Versions: 0.20.2
         Environment: Windows 7
Cygwin installed
downloaded hadoop-0.20.2.tar (apparently works best with Win 7?)
            Reporter: ajames


Hi,

I am trying to format the Hadoop file system with:

bin/hadoop namenode -format

But I received these errors in Cygwin:

/home/anjames/bin/../conf/hadoop-env.sh: line 8: $'\r': command not found
/home/anjames/bin/../conf/hadoop-env.sh: line 14: $'\r': command not found
/home/anjames/bin/../conf/hadoop-env.sh: line 17: $'\r': command not found
/home/anjames/bin/../conf/hadoop-env.sh: line 25: $'\r': command not found
/bin/java; No  such file or directoryjre7
/bin/java; No  such file or directoryjre7
/bin/java; cannot execute: No such file or directory
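
From searching around, I suspect the $'\r' errors mean hadoop-env.sh was saved with Windows (CRLF) line endings, which bash under Cygwin doesn't strip, and the stray \r at the end of the JAVA_HOME line would also explain the mangled "/bin/java" messages above. Would stripping the carriage returns like this fix it? (Just a guess on my part; sed ships with Cygwin, while dos2unix may or may not be installed:)

cd /home/anjames
# remove the trailing \r from every line of the env script
sed -i 's/\r$//' conf/hadoop-env.sh
# dos2unix conf/hadoop-env.sh    # alternative, if the dos2unix package is installed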

I had previously modified the following conf files in the cygwin/home/anjames directory:
1. core-site.xml 
2. mapred-site.xml 
3. hdfs-site.xml 

4. hadoop-env.sh

I updated this file following the instructions: "uncomment the JAVA_HOME export command, and set the path to your Java home (typically C:/Program Files/Java/{java-home})"

i.e., in the "hadoop-env.sh" file, I took out the "#" in front of the JAVA_HOME line and changed the path as follows:

export JAVA_HOME=C:\Progra~1\Java\jre7
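
One thing I'm not sure about: in a bash script, unquoted backslashes act as escape characters, so the path above may not survive as written (it would come through as C:Progra~1Javajre7). If someone can confirm, maybe a Cygwin-style path is what's expected here, something like:

export JAVA_HOME=/cygdrive/c/Progra~1/Java/jre7   # Cygwin mount of C:\Progra~1\Java\jre7
# export JAVA_HOME="C:/Progra~1/Java/jre7"        # or forward slashes, quoted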



The hadoop-env.sh file is now:

----------------------------------------------------------------

# Set Hadoop-specific environment variables here.


# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.


# The java implementation to use.  
export JAVA_HOME=C:\Progra~1\Java\jre7 ###<-----uncommented and revised code

# Extra Java CLASSPATH elements.  Optional.

# export HADOOP_CLASSPATH=


# The maximum amount of heap to use, in MB. Default is 1000.

# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server


# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export JAVA_HOME=C:\Progra~1\Java\jre7
HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HADOOP_NICENESS=10


------------------------------
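
One more thing I noticed while pasting the file: JAVA_HOME ends up exported twice (once under "The java implementation to use" and again in the middle of the OPTS block), and the later assignment is the one that sticks. Once the line endings and path are sorted out, I was planning to sanity-check java from Cygwin before re-running the format, along these lines:

$JAVA_HOME/bin/java -version     # should print the Java version if JAVA_HOME resolves
bin/hadoop namenode -format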

I'm trying to get back into the programming swing with a Big Data Analytics course, so any help is much appreciated. It's been a while; many thanks.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Resolved] (HDFS-4109) ?Formatting HDFS running into errors :( - Many thanks

Posted by "Suresh Srinivas (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HDFS-4109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas resolved HDFS-4109.
-----------------------------------

    Resolution: Invalid
    
