Posted to user@mahout.apache.org by tmefrt <gk...@yahoo.com> on 2012/09/06 03:40:30 UTC
Error running RecommenderJob using mahout-core-0.5-cdh3u4-job.jar
Hi All,
I'm trying to test item-based recommendation using the command:
hadoop jar /usr/lib/mahout/mahout-core-0.5-cdh3u4-job.jar
org.apache.mahout.cf.taste.hadoop.item.RecommenderJob
-Dmapred.input.dir=/user/etl_user/itemrecco/in_file.txt
-Dmapred.output.dir=/user/etl_user/itemreccooutput
Input file
cat in_file.txt
1,101,5.0
1,102,3.0
1,103,2.5
2,101,2.0
2,102,2.5
2,103,5.0
2,104,2.0
3,101,2.5
3,104,4.0
3,105,4.5
3,107,5.0
4,101,5.0
4,103,3.0
4,104,4.5
4,106,4.0
5,101,4.0
5,102,3.0
5,103,2.0
5,104,4.0
5,105,3.5
5,106,4.0
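As an aside, RecommenderJob reads each line as userID,itemID with an optional numeric preference value, and that format can be sanity-checked without Hadoop at all. A generic sketch with grep (not part of Mahout):

```shell
# Sanity-check the preference-file format: every line should be
# userID,itemID with an optional numeric preference value.
pattern='^[0-9]+,[0-9]+(,[0-9]+(\.[0-9]+)?)?$'
# Example against a few of the ratings above plus one malformed line:
printf '1,101,5.0\n2,103,5.0\n5,106,4.0\noops\n' | grep -Evc "$pattern"
# prints 1: one line ("oops") fails the format check
```

Running `grep -Ev "$pattern" in_file.txt` instead would print the offending lines themselves; no output means the file looks well-formed.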
I'm getting the error below in the log:
12/09/06 01:28:28 INFO mapred.JobClient:
org.apache.mahout.cf.taste.hadoop.MaybePruneRowsMapper$Elements
12/09/06 01:28:28 INFO mapred.JobClient: NEGLECTED=0
12/09/06 01:28:28 INFO mapred.JobClient: USED=21
12/09/06 01:28:28 INFO mapred.JobClient: Job Counters
12/09/06 01:28:28 INFO mapred.JobClient: Launched reduce tasks=72
12/09/06 01:28:28 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=142128
12/09/06 01:28:28 INFO mapred.JobClient: Total time spent by all reduces
waiting after reserving slots (ms)=0
12/09/06 01:28:28 INFO mapred.JobClient: Total time spent by all maps
waiting after reserving slots (ms)=0
12/09/06 01:28:28 INFO mapred.JobClient: Launched map tasks=72
12/09/06 01:28:28 INFO mapred.JobClient: Data-local map tasks=72
12/09/06 01:28:28 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=806019
12/09/06 01:28:28 INFO mapred.JobClient: FileSystemCounters
12/09/06 01:28:28 INFO mapred.JobClient: FILE_BYTES_READ=1755
12/09/06 01:28:28 INFO mapred.JobClient: HDFS_BYTES_READ=18905
12/09/06 01:28:28 INFO mapred.JobClient: FILE_BYTES_WRITTEN=199593
12/09/06 01:28:28 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=7222
12/09/06 01:28:28 INFO mapred.JobClient: Map-Reduce Framework
12/09/06 01:28:28 INFO mapred.JobClient: Reduce input groups=7
12/09/06 01:28:28 INFO mapred.JobClient: Combine output records=0
12/09/06 01:28:28 INFO mapred.JobClient: Map input records=5
12/09/06 01:28:28 INFO mapred.JobClient: Reduce shuffle bytes=71922
12/09/06 01:28:28 INFO mapred.JobClient: Reduce output records=7
12/09/06 01:28:28 INFO mapred.JobClient: Spilled Records=42
12/09/06 01:28:28 INFO mapred.JobClient: Map output bytes=420
12/09/06 01:28:28 INFO mapred.JobClient: Combine input records=0
12/09/06 01:28:28 INFO mapred.JobClient: Map output records=21
12/09/06 01:28:28 INFO mapred.JobClient: SPLIT_RAW_BYTES=11304
12/09/06 01:28:28 INFO mapred.JobClient: Reduce input records=21
12/09/06 01:28:28 ERROR common.AbstractJob: Unexpected 101 while processing
Job-Specific Options:
usage: <command> [Generic Options] [Job-Specific Options]
Generic Options:
-archives <paths> comma separated archives to be unarchived
on the compute machines.
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-files <paths> comma separated files to be copied to the
map reduce cluster
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-libjars <paths> comma separated jar files to include in
the classpath.
-tokenCacheFile <tokensFile> name of the file with the tokens
Unexpected 101 while processing Job-Specific Options:
Usage:
[--input <input> --output <output> --numberOfColumns <numberOfColumns>
--similarityClassname <similarityClassname> --maxSimilaritiesPerRow
<maxSimilaritiesPerRow> --help --tempDir <tempDir> --startPhase <startPhase>
--endPhase <endPhase>]
Job-Specific Options:
--input (-i) input                                  Path to job input directory.
--output (-o) output                                The directory pathname for output.
--numberOfColumns (-r) numberOfColumns              Number of columns in the input matrix
--similarityClassname (-s) similarityClassname      Name of distributed similarity class to
                                                    instantiate, alternatively use one of the
                                                    predefined similarities
                                                    ([SIMILARITY_COOCCURRENCE,
                                                    SIMILARITY_EUCLIDEAN_DISTANCE,
                                                    SIMILARITY_LOGLIKELIHOOD,
                                                    SIMILARITY_PEARSON_CORRELATION,
                                                    SIMILARITY_TANIMOTO_COEFFICIENT,
                                                    SIMILARITY_UNCENTERED_COSINE,
                                                    SIMILARITY_UNCENTERED_ZERO_ASSUMING_COSINE,
                                                    SIMILARITY_CITY_BLOCK])
--maxSimilaritiesPerRow (-m) maxSimilaritiesPerRow  Number of maximum similarities per row
                                                    (default: 100)
--help (-h)                                         Print out help
--tempDir tempDir                                   Intermediate output directory
--startPhase startPhase                             First phase to run
--endPhase endPhase                                 Last phase to run
12/09/06 01:28:28 INFO mapred.JobClient: Cleaning up the staging area
hdfs://hadoop-namenode-2.v39.ch3.caracal.com/tmp/hadoop-mapred/mapred/staging/etl_user/.staging/job_201205291818_31228
Exception in thread "main"
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does
not exist: temp/similarityMatrix
at
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
at
org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:55)
at
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
at
org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:899)
at
org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:916)
at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:834)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:793)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1063)
at
org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:793)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:465)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:495)
at
org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.run(RecommenderJob.java:239)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at
org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.main(RecommenderJob.java:333)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
I would greatly appreciate your help in identifying the problem.
--
View this message in context: http://lucene.472066.n3.nabble.com/Error-running-RecommenderJob-using-mahout-core-0-5-cdh3u4-job-jar-tp4005786.html
Sent from the Mahout User List mailing list archive at Nabble.com.
Re: Error running RecommenderJob using mahout-core-0.5-cdh3u4-job.jar
Posted by Tmefrt <gk...@yahoo.com>.
I tried running the following:
mahout recommenditembased --input /user/etl_user/itemrecco --output
/user/etl_user/itemreccooutput --usersFile /user/etl_user/users.txt
It gets stuck at the same job, with the same error.
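For comparison, an invocation that spells out the remaining options explicitly might look like the sketch below; the similarity class and tempDir shown are illustrative choices, not values from the thread.

```shell
# Hypothetical fuller invocation of the same job. SIMILARITY_PEARSON_CORRELATION
# and the tempDir path are illustrative; any of the predefined similarity
# names from the usage message above would do.
mahout recommenditembased \
  --input /user/etl_user/itemrecco \
  --output /user/etl_user/itemreccooutput \
  --usersFile /user/etl_user/users.txt \
  --similarityClassname SIMILARITY_PEARSON_CORRELATION \
  --tempDir /user/etl_user/temp
```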
Re: Error running RecommenderJob using mahout-core-0.5-cdh3u4-job.jar
Posted by Sean Owen <sr...@gmail.com>.
This is just a follow-on error since the intermediate result was not
created for the next stage. This is not the problem, nor is the output
directory. It is, as I said, the -D args.
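One practical consequence: intermediate output from the failed run can linger and confuse a retry, so clearing it first is a cheap precaution. A sketch; the relative path "temp" resolves under the user's HDFS home directory:

```shell
# Remove leftovers from the failed run before retrying. -rmr is the
# recursive-delete syntax of this Hadoop vintage (later: -rm -r).
hadoop fs -rmr temp
hadoop fs -rmr /user/etl_user/itemreccooutput   # output dir must not pre-exist
```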
On Thu, Sep 6, 2012 at 9:45 AM, A Geek <dw...@live.com> wrote:
> hi, I just went through the log and found this error msg:
>
> Exception in thread "main"
> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: temp/similarityMatrix
>     at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
>
> Can you have a look at the specified path and ensure that the said folder exists? HTH.
RE: Error running RecommenderJob using mahout-core-0.5-cdh3u4-job.jar
Posted by A Geek <dw...@live.com>.
hi, I just went through the log and found this error msg:

> Exception in thread "main"
> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: temp/similarityMatrix
>     at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)

Can you have a look at the specified path and ensure that the said folder exists? HTH.
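A sketch of that check; relative HDFS paths resolve under /user/<username>:

```shell
# Does the intermediate directory the job complains about actually exist?
hadoop fs -ls temp
hadoop fs -ls temp/similarityMatrix
```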
Thanks, KK
> [original message quoted in full; snipped]
Re: Error running RecommenderJob using mahout-core-0.5-cdh3u4-job.jar
Posted by Sean Owen <sr...@gmail.com>.
-D arguments are arguments to the JVM, not the program. This needs to
go in the "HADOOP_OPTS" env variable if using the hadoop binary.
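Under that reading, the properties would be handed to the JVM through the environment rather than on the program's argument list; a sketch using the paths from the original post:

```shell
# Sketch: pass the properties as JVM system properties via HADOOP_OPTS
# instead of as arguments after the class name.
export HADOOP_OPTS="-Dmapred.input.dir=/user/etl_user/itemrecco/in_file.txt -Dmapred.output.dir=/user/etl_user/itemreccooutput"
hadoop jar /usr/lib/mahout/mahout-core-0.5-cdh3u4-job.jar \
  org.apache.mahout.cf.taste.hadoop.item.RecommenderJob
```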
On Thu, Sep 6, 2012 at 8:05 AM, Lee Carroll
<le...@googlemail.com> wrote:
> -Dmapred.output.dir=/user/etl_user/itemreccooutput
> should that be
> -Dmapred.output.dir=/user/etl_user/itemrecco/output
>
Re: Error running RecommenderJob using mahout-core-0.5-cdh3u4-job.jar
Posted by Lee Carroll <le...@googlemail.com>.
-Dmapred.output.dir=/user/etl_user/itemreccooutput
should that be
-Dmapred.output.dir=/user/etl_user/itemrecco/output
On 6 September 2012 02:40, tmefrt <gk...@yahoo.com> wrote:
> [original message quoted in full; snipped]