Posted to common-dev@hadoop.apache.org by Mahdi Bazargani <ma...@yahoo.com.INVALID> on 2015/05/23 14:11:06 UTC

Problem with Apache Hadoop configuration: connection refused error

Dear Hadoop users,

I am trying to configure Hadoop on Windows 7 using Cygwin, on just one node, with both master and slave set to localhost.
My configuration files are as below:
==================
hdfs-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/bazargan/hadoop-1.2.1/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/bazargan/data</value>
    <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/bazargan/name</value>
  </property>
  <property>
    <name>dfs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
  </property>
</configuration>

==================
core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/bazargan/hadoop-1.2.1/hdfs-tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
  </property>
</configuration>

==================
mapred-site.xml

<configuration>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/bazargan/hadoop-1.2.1/mapred-tmp</value>
    <description>Comma-separated list of paths on the local filesystem where temporary Map/Reduce data is written.</description>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
  </property>
</configuration>

==================
hadoop-env.sh

# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
# export JAVA_HOME=/cygdrive/c/"Program Files"/Java/jdk1.7.0_25

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the users that are going to run the hadoop daemons.  Otherwise there is
#       the potential for a symlink attack.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HADOOP_NICENESS=10
==================
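
For reference, this is roughly how I bring the cluster up and check which daemons are running (a sketch of my steps; it assumes the standard hadoop-1.2.1 bin scripts and that jps from the JDK is on the PATH):

cd /home/bazargan/hadoop-1.2.1
bin/start-all.sh   # should start NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker
jps                # lists the Java daemon processes that actually came up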


When I try to access localhost:50070 it works perfectly, but when I try to access localhost:50030 I get a connection refused error.
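
A quick way to confirm whether anything is actually listening on the two web UI ports (just a sketch; it assumes netstat and grep are available in the Cygwin shell):

netstat -an | grep 50070   # NameNode web UI - the one that works
netstat -an | grep 50030   # JobTracker web UI - the one that refuses connections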

I really don't know what is causing the error. In the log files I also don't see any error; for example, below is my DataNode log:

ulimit -a for user Mahdi
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) 2032
cpu time               (seconds, -t) unlimited
max user processes              (-u) 256
virtual memory          (kbytes, -v) unlimited
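
Since 50030 is the JobTracker web UI, I assume the JobTracker log rather than the DataNode log would show why nothing is listening there; this is roughly where I would look (assuming the default log directory under the hadoop-1.2.1 install, with the usual hadoop-<user>-jobtracker-<host>.log naming):

cd /home/bazargan/hadoop-1.2.1
ls logs/
tail -n 100 logs/hadoop-*-jobtracker-*.log   # look for a bind error or a startup exception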

I also turned off my firewall.
I would really appreciate it if anyone could help me.
Thanks.