Posted to user@nutch.apache.org by yl...@ifrance.com, yl...@ifrance.com on 2007/01/19 19:05:52 UTC

Input directory urls/url-fr.txt in localhost:9000 is invalid with Hadoop 0.4.0 patched and Nutch 0.8.1

Hello,

I cannot index a website with Nutch and Hadoop.
I have spent 15 days trying to get Nutch 0.8.1 to work, with no success.

I use:
* jdk1.5.0_10 or 1.4.2_13 (I have the same problem with both JDKs)
* Nutch-0.8.1
* Hadoop-0.4.0 from Nutch-0.8.1
I made the configuration following http://wiki.apache.org/nutch/NutchHadoopTutorial
I have ONLY one server.

I start Hadoop-0.4.0 (start-all.sh) with no errors in the logs.

The crawl and url directories are created.
I use the dfs put command line to copy them into the DFS file system.
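(For reference, the put commands were roughly as follows; the exact invocations are an assumption based on the listing below:)

[nutch-0.8.1]$ bin/hadoop dfs -mkdir crawls
[nutch-0.8.1]$ bin/hadoop dfs -mkdir urls
[nutch-0.8.1]$ bin/hadoop dfs -put urls/url-fr.txt urls/url-fr.txt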

[nutch-0.8.1]$ bin/hadoop dfs -lsr
/user/webadm/crawls     <dir>
/user/webadm/urls       <dir>
/user/webadm/urls/url-fr.txt        <r 1>   44

And when I crawl with Nutch 0.8.1, I get this error message:

[nutch-0.8.1]$ bin/nutch crawl urls/url-fr.txt -dir crawls/crawl-fr -depth 10 -topN 50
crawl started in: crawls/crawl-fr
rootUrlDir = urls/url-fr.txt
threads = 10
depth = 10
topN = 50
Injector: starting
Injector: crawlDb: crawls/crawl-fr/crawldb
Injector: urlDir: urls/url-fr.txt
Injector: Converting injected urls to crawl db entries.
Exception in thread "main" java.io.IOException: Input directory /user/webadm/urls/url-fr.txt in localhost:9000 is invalid.
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:274)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:327)
        at org.apache.nutch.crawl.Injector.inject(Injector.java:138)
        at org.apache.nutch.crawl.Crawl.main(Crawl.java:105)
*******************************************************************

In nutch-0.8.1/logs/namenodep.log, I have these warnings:

2007-01-19 20:05:26,522 WARN fs.FSNamesystem - Replication requested of 2 is larger than cluster size (1). Using cluster size.
2007-01-19 20:05:26,523 WARN fs.FSNamesystem - Zero targets found, forbidden1.size=1 forbidden2.size()=0... 16 more


In my /opt/nutch-0.8.1/conf/hadoop-env.sh,
I have:

export HADOOP_HOME=/opt/nutch-0.8.1
export JAVA_HOME=/logiciels/java/jdk1.5.0_10
export HADOOP_HEAPSIZE=1000
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
export HADOOP_PID_DIR=/opt/nutch-0.8.1/pids

****************************************

In my /opt/nutch-0.8.1/conf/hadoop-site.xml,
I have these values (with their xml tags, of course):

hadoop.tmp.dir = /opt/nutch-0.8.1/HDFS
fs.default.name = localhost:9000
dfs.name.dir = /opt/nutch-0.8.1/HDFS/dfs/name
dfs.client.buffer.dir = /opt/nutch-0.8.1/HDFS/dfs/tmp
dfs.data.dir = /opt/nutch-0.8.1/HDFS/dfs/data
dfs.replication = 1
mapred.job.tracker = localhost:9001
mapred.local.dir = /opt/nutch-0.8.1/HDFS/mapred/local
mapred.system.dir = /opt/nutch-0.8.1/HDFS/mapred/system
mapred.temp.dir = /opt/nutch-0.8.1/HDFS/mapred/temp
mapred.map.tasks = 2
mapred.reduce.tasks = 1
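
For example, the fs.default.name and dfs.replication entries above, written with their xml tags inside the <configuration> element, look like this:

<property>
  <name>fs.default.name</name>
  <value>localhost:9000</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>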


In my conf/crawl-urlfilter.txt, I have this line to index:
+^http://www.mywebsite.com

In my urls/url-mywebsite, I have this line to index:
http://www.mywebsite.com/index.htm

In my conf/nutch-site.xml, I have this line:
searcher.dir = crawls/crawl-mywebsite
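
Written out with its xml tags, that nutch-site.xml entry is:

<property>
  <name>searcher.dir</name>
  <value>crawls/crawl-mywebsite</value>
</property>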

Thanks in advance.

Yannick LE NY



Re: Input directory urls/url-fr.txt in localhost:9000 is invalid with Hadoop 0.4.0 patched and Nutch 0.8.1

Posted by Andrzej Bialecki <ab...@getopt.org>.
yleny@ifrance.com wrote:
> [...]
>
> [nutch-0.8.1]$ bin/nutch crawl urls/url-fr.txt  -dir crawls/crawl-fr -depth 10  -topN 50
> crawl started in: crawls/crawl-fr
> rootUrlDir = urls/url-fr.txt
> threads = 10
> depth = 10
> topN = 50
> Injector: starting
> Injector: crawlDb: crawls/crawl-fr/crawldb
> Injector: urlDir: urls/url-fr.txt
> Injector: Converting injected urls to crawl db entries.
> Exception in thread "main" java.io.IOException: Input directory /user/webadm/urls/url-fr.txt in localhost:9000 is invalid.
>         at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:274)
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:327)
>         at org.apache.nutch.crawl.Injector.inject(Injector.java:138)
>         at org.apache.nutch.crawl.Crawl.main(Crawl.java:105)
>         
>   


... and that's because "urlDir: urls/url-fr.txt" is not a directory but
a file. You should pass only "urls" as the input directory; Nutch
will read all the text files inside that directory.
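
In other words, the crawl invocation would presumably become (same options as before, just passing the directory instead of the file):

[nutch-0.8.1]$ bin/nutch crawl urls -dir crawls/crawl-fr -depth 10 -topN 50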

-- 
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com