Posted to dev@mahout.apache.org by "Dan Brickley (JIRA)" <ji...@apache.org> on 2011/02/25 10:13:38 UTC
[jira] Commented: (MAHOUT-180) port Hadoop-ified Lanczos SVD implementation from decomposer
[ https://issues.apache.org/jira/browse/MAHOUT-180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12999274#comment-12999274 ]
Dan Brickley commented on MAHOUT-180:
-------------------------------------
This looks great, but a little more documentation would really help those of us new to Mahout.
(perhaps in https://cwiki.apache.org/MAHOUT/svd-singular-value-decomposition.html ?)
I would jump in and help with documentation, but I'd first like to be confident that I'm understanding things correctly. Right now, I'm not.
A couple of trivial things that tripped me:
- the example above uses 'hadoop -jar', but I found (using Hadoop 0.20.2+737) that I needed 'hadoop jar' (no hyphen).
- the example has "--numRows 0 (currently ignored, not needed)"; is this still the case? The text output suggests the value is used now.
Conceptually (from a broad-brush understanding of SVD), I was initially expecting three matrices back, not a single eigenvector matrix. I'm happy to RTFM and brush up on the linear algebra, but some pointers would really help. Is it possible to get the decomposition into U, s and V?
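For reference, the relationship between the eigenvector output and the full U, s, V factors can be illustrated numerically. This is a generic sketch using NumPy on a small dense matrix, not Mahout's actual API: a Lanczos-style solver applied to A^T A yields the right singular vectors V and eigenvalues whose square roots are the singular values, and U can then be recovered as A V S^-1.

```python
import numpy as np

# Toy matrix A (m x n). Solvers that eigen-decompose A^T A return the
# right singular vectors V; the remaining SVD factors follow from them.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])

# Eigen-decomposition of the Gram matrix A^T A
evals, V = np.linalg.eigh(A.T @ A)

# Sort eigenpairs in descending order of eigenvalue
order = np.argsort(evals)[::-1]
evals, V = evals[order], V[:, order]

# Singular values are the square roots of the eigenvalues of A^T A
s = np.sqrt(np.clip(evals, 0.0, None))

# Left singular vectors: U = A V S^{-1} (A here has full column rank,
# so no singular value is zero)
U = (A @ V) / s

# Check the factorization: A = U diag(s) V^T
assert np.allclose(A, U @ np.diag(s) @ V.T)
```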
> port Hadoop-ified Lanczos SVD implementation from decomposer
> ------------------------------------------------------------
>
> Key: MAHOUT-180
> URL: https://issues.apache.org/jira/browse/MAHOUT-180
> Project: Mahout
> Issue Type: New Feature
> Components: Math
> Affects Versions: 0.2
> Reporter: Jake Mannix
> Assignee: Jake Mannix
> Priority: Minor
> Fix For: 0.3
>
> Attachments: MAHOUT-180.patch, MAHOUT-180.patch, MAHOUT-180.patch, MAHOUT-180.patch, MAHOUT-180.patch
>
>
> I wrote up a hadoop version of the Lanczos algorithm for performing SVD on sparse matrices available at http://decomposer.googlecode.com/, which is Apache-licensed, and I'm willing to donate it. I'll have to port over the implementation to use Mahout vectors, or else add in these vectors as well.
> Current issues with the decomposer implementation: if your matrix is really big, you need to re-normalize before decomposition. Find the largest eigenvalue first and divide all your rows by that value, then decompose; otherwise you'll blow past Double.MAX_VALUE once you've run too many iterations (the L^2 norm of the intermediate vectors grows roughly as (largest-eigenvalue)^(num-eigenvalues-found-so-far), so losing precision on the lower end is better than overflowing MAX_VALUE). When this is ported to Mahout, we should add the capability to do this automatically: run a couple of iterations to find the largest eigenvalue, save it, then iterate while scaling the vectors by 1/max_eigenvalue.
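The re-normalization idea described above can be sketched in NumPy. This is an illustration of the principle rather than the decomposer code: estimate the largest eigenvalue of A^T A with a few power-iteration steps, scale the matrix so its spectrum is bounded, decompose, then scale the singular values back. Scaling A by a constant c scales every singular value by c, so nothing is lost.

```python
import numpy as np

# Random test matrix standing in for a large sparse input
A = np.random.default_rng(0).standard_normal((50, 20))

# Power iteration to estimate the largest eigenvalue of A^T A
v = np.ones(A.shape[1])
for _ in range(100):
    w = A.T @ (A @ v)
    v = w / np.linalg.norm(w)
lam_max = v @ (A.T @ (A @ v))

# Scale by the largest singular value (sqrt of the largest eigenvalue
# of A^T A), so the scaled matrix's singular values are all <= 1 and
# intermediate vector norms stay bounded during iteration.
A_scaled = A / np.sqrt(lam_max)
s_scaled = np.linalg.svd(A_scaled, compute_uv=False)

# Recover the original spectrum by multiplying the scale back in
s = s_scaled * np.sqrt(lam_max)
assert np.allclose(s, np.linalg.svd(A, compute_uv=False))
```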
--
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira