Posted to dev@mahout.apache.org by "Andrew Palumbo (JIRA)" <ji...@apache.org> on 2014/09/14 00:59:33 UTC
[jira] [Created] (MAHOUT-1615) SparkEngine drmFromHDFS returning the same Key for all Key,Vec Pairs for Text-Keyed Files
Andrew Palumbo created MAHOUT-1615:
--------------------------------------
Summary: SparkEngine drmFromHDFS returning the same Key for all Key,VecPairs for Text-Keyed Files
Key: MAHOUT-1615
URL: https://issues.apache.org/jira/browse/MAHOUT-1615
Project: Mahout
Issue Type: Bug
Reporter: Andrew Palumbo
Fix For: 1.0
When reading seq2sparse output of the form <Text, VectorWritable> from HDFS in the spark-shell, SparkEngine's drmFromHDFS method creates RDDs with the same key for all (key, vector) pairs:
`mahout> val drmTFIDF= drmFromHDFS( path = "/tmp/mahout-work-andy/20news-test-vectors/part-r-00000")`
Has keys:
{...}
key: /talk.religion.misc/84570
key: /talk.religion.misc/84570
key: /talk.religion.misc/84570
{...}
for the entire set; every key is the last key read from the file. The problem can be traced to drmFromHDFS in SparkEngine.scala:
`val rdd = sc.sequenceFile(path, classOf[Writable], classOf[VectorWritable], minPartitions = parMin)
// Get rid of VectorWritable
.map(t => (t._1, t._2.get()))`
which stores a reference to t._1 for every pair. Hadoop's SequenceFile reader reuses a single Writable instance for the key, so once the iterator advances, every stored reference points at the same object, which ends up holding the last key read. The key needs to be deep-copied (or materialized, e.g. via toString for Text keys) before the reference escapes the map.
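The reuse pitfall can be reproduced without Spark or Hadoop at all. Below is a minimal sketch in plain Scala: `MutableKey` is a hypothetical stand-in for org.apache.hadoop.io.Text, `readAll` mimics the buggy reference-keeping pattern, and `readAllFixed` shows one way to materialize keys before they escape (the actual fix applied in Mahout may differ).

```scala
// Minimal sketch of the object-reuse pitfall behind this bug (no Spark or
// Hadoop required). Hadoop's SequenceFile reader recycles ONE Writable
// instance per record; MutableKey below is a hypothetical stand-in for
// org.apache.hadoop.io.Text.
object ReusePitfall {
  final class MutableKey { var value: String = "" }

  // Buggy pattern: the single shared key object escapes into every tuple.
  def readAll(records: Seq[(String, Int)]): Seq[(MutableKey, Int)] = {
    val shared = new MutableKey            // one instance, reused per record
    records.map { case (k, v) =>
      shared.value = k                     // "reader" overwrites the same object
      (shared, v)                          // BUG: reference stored, not a copy
    }
  }

  // Fixed pattern: materialize the key before the reference escapes the map.
  def readAllFixed(records: Seq[(String, Int)]): Seq[(String, Int)] = {
    val shared = new MutableKey
    records.map { case (k, v) =>
      shared.value = k
      (shared.value, v)                    // immutable String copy, safe to keep
    }
  }

  def main(args: Array[String]): Unit = {
    val in = Seq(("/talk.politics/1", 1), ("/talk.religion.misc/84570", 2))
    // Both entries report the last key read -- the symptom in this issue.
    println(readAll(in).map(_._1.value))
    println(readAllFixed(in).map(_._1))    // distinct keys survive
  }
}
```

Because Seq.map is strict, every tuple from readAll holds the same MutableKey instance, so reading the keys afterwards yields only the final value, exactly the behavior seen with drmFromHDFS.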
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)