Posted to user@nutch.apache.org by Sonal Goyal <so...@gmail.com> on 2011/03/13 11:45:55 UTC

Re: problem setup hadoop with nutch

You should run bin/stop-all.sh to stop the processes, then check with ps -ef
whether any stray Hadoop processes are still running and kill them. Then
restart everything.
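
For example, here is a minimal sketch of that sequence (the PID is a
placeholder; I am assuming the stock Hadoop scripts under bin/ in your
nutch-1.2 directory, as your start-all.sh output shows):

    # on the master: stop all daemons cleanly
    bin/stop-all.sh

    # on the master and on every slave: look for leftover Hadoop JVMs
    jps
    ps -ef | grep -i hadoop | grep -v grep

    # kill any stray DataNode/TaskTracker by PID (10145 is a placeholder)
    kill 10145
    kill -9 10145    # only if it does not exit after a plain kill

    # back on the master: start everything again
    bin/start-all.sh

The BindException in your slave log ("Address already in use" while starting
the embedded HTTP server) points the same way: most likely an old TaskTracker
is still holding the web UI port (50060, the one your master log shows Jetty
binding), so the freshly started TaskTracker dies right away. To see which
process holds that port on a slave:

    # either of these should name the process bound to 50060
    netstat -tlnp | grep 50060
    lsof -i :50060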

Thanks and Regards,
Sonal
Hadoop ETL and Data Integration <https://github.com/sonalgoyal/hiho>
Nube Technologies <http://www.nubetech.co>

<http://in.linkedin.com/in/sonalgoyal>





2011/3/13 Abdulelah almubarak <al...@w.cn>

>
>
> Hi everybody,
>
> I have a problem setting up Hadoop.
>
> I have 1 master and 3 slaves.
>
> My problem: a TaskTracker appears on each slave while bin/start-all.sh is
> running on the master, then disappears once bin/start-all.sh has finished.
>
> My configuration files:
>
>
> _______________________________________________________________________________________
>
> core-site.xml for master
>
> File: conf/core-site.xml
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>  <property>
>    <name>fs.default.name</name>
>    <value>hdfs://naba01:54310</value>
>    <description>
>       Where to find the Hadoop Filesystem through the network.
>       Note 54310 is not the default port.
>       (This is slightly changed from previous versions, which didn't have
> "hdfs".)
>    </description>
>  </property>
> </configuration>
>
> ______________________________________________________________
>
>
> hdfs-site.xml for master
>
> File: conf/core-site.xml
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>  <property>
>    <name>fs.default.name</name>
>    <value>hdfs://naba01:54310</value>
>    <description>
>       Where to find the Hadoop Filesystem through the network.
>       Note 54310 is not the default port.
>       (This is slightly changed from previous versions, which didn't have
> "hdfs".)
>    </description>
>  </property>
> </configuration>
>
>
> _________________________________________________________________________________________________________
>
> mapred-site.xml for master
>
>
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
> <property>
>  <name>mapred.job.tracker</name>
>  <value>naba01:54311</value>
>  <description>
>    The host and port that the MapReduce job tracker runs at. If
>    "local", then jobs are run in-process as a single map and
>    reduce task.
>    Note 54311 is not the default port.
>  </description>
> </property>
>
> <property>
>  <name>mapred.map.tasks</name>
>  <value>40</value>
>  <description>
>    Define mapred.map.tasks based on the number of slave hosts.
>  </description>
> </property>
>
> <property>
>  <name>mapred.reduce.tasks</name>
>  <value>8</value>
>  <description>
>    Define mapred.reduce.tasks based on the number of slave hosts.
>  </description>
> </property>
>
> <property>
>  <name>mapred.system.dir</name>
>  <value>/home/naba/fs0/mapreduce/system</value>
> </property>
>
> <property>
>  <name>mapred.local.dir</name>
>  <value>/home/naba/fs0/mapreduce/local</value>
> </property>
>
> </configuration>
>
> ________________________________________________________________________________________________________
>
> When I run bin/start-all.sh on the master:
>
> naba@naba01:~/nutch-1.2$ bin/start-all.sh
> starting namenode, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-namenode-naba01.out
> naba01: starting datanode, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-datanode-naba01.out
> naba04: starting datanode, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-datanode-naba04.out
> naba03: starting datanode, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-datanode-naba03.out
> naba02: starting datanode, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-datanode-naba02.out
> naba01: starting secondarynamenode, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-secondarynamenode-naba01.out
> starting jobtracker, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-jobtracker-naba01.out
> naba04: starting tasktracker, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-tasktracker-naba04.out
> naba02: starting tasktracker, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-tasktracker-naba02.out
> naba01: starting tasktracker, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-tasktracker-naba01.out
> naba03: starting tasktracker, logging to
> /home/naba/nutch-1.2/bin/../logs/hadoop-naba-tasktracker-naba03.out
>
> _________________________________________________________
>
> Output of jps on the master:
>
> naba@naba01:~/nutch-1.2$ jps
> 14661 FsShell
> 28390 TaskTracker
> 28153 SecondaryNameNode
> 28016 DataNode
> 28256 JobTracker
> 27871 NameNode
> 28816 Jps
>
> __________________________________________________________
>
> Output of jps on a slave, repeated while bin/start-all.sh was running on the
> master:
>
> naba@naba02:~/nutch-1.2$ jps
> 10246 Jps
> 10145 TaskTracker
> 10004 DataNode
> naba@naba02:~/nutch-1.2$ jps
> 10274 Jps
> 10145 TaskTracker
> 10004 DataNode
> naba@naba02:~/nutch-1.2$ jps
> 10311 Jps
> 10004 DataNode
> naba@naba02:~/nutch-1.2$ jps
> 10346 Jps
> naba@naba02:~/nutch-1.2$ jps
> 10374 Jps
>
>
> Again, my problem: a TaskTracker appears on the slave while bin/start-all.sh
> is running on the master, then disappears once it has finished.
>
>
> Master:
>  logs/hadoop-naba-tasktracker-naba01.log
>
> 2011-03-13 12:14:00,869 INFO  mortbay.log - Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2011-03-13 12:14:00,998 INFO  mortbay.log - jetty-6.1.14
> 2011-03-13 12:19:21,013 INFO  mortbay.log - Started
> SelectChannelConnector@0.0.0.0:50060
> 2011-03-13 12:27:01,487 WARN  mapred.TaskTracker - TaskTracker's
> totalMemoryAllottedForTasks is -1. TaskMemoryManager is disabled.
> 2011-03-13 13:16:23,601 INFO  mortbay.log - Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2011-03-13 13:16:23,730 INFO  mortbay.log - jetty-6.1.14
>
>
> Slave:
> logs/hadoop-naba-tasktracker-naba03.log
> 2011-03-13 12:16:16,473 INFO  mortbay.log - Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2011-03-13 12:16:16,600 ERROR mapred.TaskTracker - Can not start task
> tracker because java.net.BindException: Address already in use
>        at sun.nio.ch.Net.bind(Native Method)
>        at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>        at
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:425)
>        at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:931)
>        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2833)
>
> 2011-03-13 13:18:40,086 INFO  mortbay.log - Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2011-03-13 13:18:40,229 ERROR mapred.TaskTracker - Can not start task
> tracker because java.net.BindException: Address already in use
>        at sun.nio.ch.Net.bind(Native Method)
>        at
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>        at
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:425)
>        at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:931)
>        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2833)
>
>
> Regards
> Almubarak