Posted to user@hadoop.apache.org by Madhav Sharan <ms...@usc.edu> on 2016/08/10 19:25:46 UTC

Pairwise similarity using map reduce

Hi hadoop users,

I have a set of vectors stored in .txt files on HDFS. The goal is to take every
pair of vectors and compute the similarity between them.

   1. We generate the pairs of vectors with a Python script and feed them as
   input to the MR job. Each line of the input file holds a comma-separated
   pair of paths to vector files: "/path/to/vec1, path/to/vec2".
   2. Each mapper task then gets (Path1, Path2) and computes the similarity.

To do this, the mapper reads the file at Path1 using the HDFS API, then reads
the file at Path2 the same way. As a result, each file is read many times over
because of the pairwise calculation.

I am trying to find a way to read each file only once, so that my mapper tasks
receive the contents of the files rather than the file paths.

Can someone please share any technique they have used in the past that might
help?

Thanks
--
Madhav Sharan