Posted to dev@mahout.apache.org by "Samee Zahur (JIRA)" <ji...@apache.org> on 2008/04/04 08:49:25 UTC
[jira] Updated: (MAHOUT-24) Skeletal LWLR implementation
[ https://issues.apache.org/jira/browse/MAHOUT-24?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Samee Zahur updated MAHOUT-24:
------------------------------
Attachment: LWLR.patch.tar.bz2
Output format:
It outputs n lines, where n is the number of dimensions.
The i-th line = sum(x[i]*x[ind]), where ind is the index of the independent variable.
For the classical 2D case, this means the regression line's gradient is given by (1st line value)/(2nd line value).
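To make the format concrete, here is a hypothetical sketch (plain Java, not taken from the patch; the class name and sample points are made up) of how those sums and the 2D gradient could be computed:

  // Hypothetical illustration, not code from the patch.
  public class LwlrOutputExample {
    public static void main(String[] args) {
      // Sample 2D points: column `ind` holds the independent variable.
      double[][] points = { {1.0, 2.0}, {2.0, 4.1}, {3.0, 5.9} };
      int ind = 0;                  // index of the independent variable
      int n = points[0].length;     // n dimensions -> n output lines
      double[] sums = new double[n];
      for (int i = 0; i < n; i++) {
        for (double[] p : points) {
          sums[i] += p[i] * p[ind]; // i-th line = sum(x[i]*x[ind])
        }
        System.out.println(sums[i]);
      }
      // For a regression line through the origin, the gradient is
      // sum(x[dep]*x[ind]) / sum(x[ind]*x[ind]); which printed line ends up
      // in the numerator depends on which dimension is the independent one.
      int dep = 1 - ind;            // the other dimension in the 2D case
      System.out.println("gradient = " + sums[dep] / sums[ind]);
    }
  }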
Many other improvements are definitely possible, including a host of commonly used options (lines not passing through the origin, the weight function given as a function of x rather than specified in a file, etc.). All of those are doable and shouldn't take too long.
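For illustration, the usual x-dependent weight in LWLR is a Gaussian kernel centered on the query point; here is a minimal generic sketch (not from the patch; the class name and the bandwidth name tau are my own choices):

  // Standard LWLR weighting scheme, sketched generically: points near the
  // query x0 get weight close to 1, distant points get weight close to 0.
  public final class GaussianWeight {
    private final double tau; // bandwidth: controls how "local" the fit is

    public GaussianWeight(double tau) {
      this.tau = tau;
    }

    // w(x) = exp(-(x - x0)^2 / (2 * tau^2))
    public double weight(double x, double x0) {
      double d = x - x0;
      return Math.exp(-(d * d) / (2.0 * tau * tau));
    }
  }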
Samee
> Skeletal LWLR implementation
> ----------------------------
>
> Key: MAHOUT-24
> URL: https://issues.apache.org/jira/browse/MAHOUT-24
> Project: Mahout
> Issue Type: New Feature
> Environment: n/a
> Reporter: Samee Zahur
> Attachments: LWLR.patch.tar.bz2
>
>
> This is a very skeletal but functional implementation of LWLR. It outputs n lines, where n is the number of dimensions; the i-th line = sum(x[i]*x[ind]), where ind is the index of the independent variable. So for the classical 2D case, the actual gradient = (2nd line)/(1st line).
> Contains a single small test case for demonstration.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.