Posted to user@mahout.apache.org by Ted Dunning <te...@gmail.com> on 2009/11/09 08:09:47 UTC

Re: Re: Re: got Error: GC overhead limit exceeded when generateproductsimilariy

Close.

See the link below for one approach to finding the most important ones.  I
believe that Sean has added something like this to Taste/Mahout.

http://tdunning.blogspot.com/2008/03/surprise-and-coincidence.html
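
For concreteness, here is a rough sketch in Java of the test described in
that post. It computes the G^2 (log-likelihood ratio) statistic from the 2x2
contingency table for a pair of items: k11 = users who touched both items,
k12 and k21 = users who touched only one of them, k22 = users who touched
neither. The names here are illustrative, not any particular Mahout API.

    public final class LogLikelihood {

      private LogLikelihood() {
      }

      // x * ln(x), with the convention that 0 * ln(0) = 0
      private static double xLogX(long x) {
        return x == 0L ? 0.0 : x * Math.log(x);
      }

      // Unnormalized entropy of a set of counts (N * H, in nats)
      private static double entropy(long... counts) {
        long sum = 0L;
        double logSum = 0.0;
        for (long c : counts) {
          logSum += xLogX(c);
          sum += c;
        }
        return xLogX(sum) - logSum;
      }

      // 2 * (H(row sums) + H(column sums) - H(whole table));
      // a large value means the co-occurrence is surprising, i.e. significant.
      public static double logLikelihoodRatio(long k11, long k12, long k21, long k22) {
        double rowEntropy = entropy(k11 + k12, k21 + k22);
        double columnEntropy = entropy(k11 + k21, k12 + k22);
        double matrixEntropy = entropy(k11, k12, k21, k22);
        return 2.0 * Math.max(0.0, rowEntropy + columnEntropy - matrixEntropy);
      }
    }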

On Sun, Nov 8, 2009 at 10:51 PM, Yi Wang <wa...@yahoo.com.cn> wrote:

> Maybe Ted means the top ones.
>
> --- On Mon, Nov 9, 2009, cumtyjh <cu...@163.com> wrote:
>
> From: cumtyjh <cu...@163.com>
> Subject: Re: Re: Re: got Error: GC overhead limit exceeded when
> generateproductsimilariy
> To: "mahout-user" <ma...@lucene.apache.org>
> Date: Mon, Nov 9, 2009, 2:42 PM
>
>
> I am new to recommendation; what is the meaning of "significant
> ones"?
>
> 2009-11-09
>
>
>
> cumtyjh
>
>
>
> From: Ted Dunning
> Sent: 2009-11-09 14:35:51
> To: mahout-user
> Cc:
> 主题: Re: Re: got Error: GC overhead limit exceeded when
> generateproductsimilariy
>
> You shouldn't be generating all item-item links.  You only want the
> significant ones.
> On Sun, Nov 8, 2009 at 8:31 PM, cumtyjh <cu...@163.com> wrote:
> > I want to generate item-item similarity offline, then I can use it for
> > recommendation.
> --
> Ted Dunning, CTO
> DeepDyve
>
>
>
>



-- 
Ted Dunning, CTO
DeepDyve

Re: Re: Re: got Error: GC overhead limit exceeded when generateproductsimilariy

Posted by cumtyjh <cu...@163.com>.
Got it.


Thank you all.

2009-11-09 



cumtyjh 



From: Ted Dunning
Sent: 2009-11-09 15:10:48
To: mahout-user
Cc:
Subject: Re: Re: Re: got Error: GC overhead limit exceeded when generateproductsimilariy
 

Re: Re: Re: got Error: GC overhead limit exceeded when generateproductsimilariy

Posted by Ted Dunning <te...@gmail.com>.
On Mon, Nov 9, 2009 at 4:57 AM, Sean Owen <sr...@gmail.com> wrote:

> Ted will say, and again I agree, that Pearson is not usually the
> best similarity metric, though it is widely mentioned in collaborative
> filtering examples and literature.
>

You said it!  I don't need to.


> What Ted quotes below is implemented in the framework as
> LogLikelihoodSimilarity. For that, I believe it *is* the pairs with
> the largest resulting similarity score that you do want to keep. Or at
> least it is more reasonable. Ted, maybe you can check my thinking on
> that.
>

Yes.  And you don't even need the score in the end, just the fact that it
passed the threshold.  I typically weight the pairing by the IDF score of
the source item.
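
Concretely, that might look something like the sketch below. The threshold
value, idf(), and all the names are assumptions for illustration (none of
this is Taste API), and it reuses the logLikelihoodRatio sketch from earlier
in the thread.

    public final class SignificantPairs {

      // Illustrative cutoff; in practice you would tune this empirically.
      private static final double LLR_THRESHOLD = 10.0;

      // Classic IDF: items seen by fewer users get a larger weight.
      static double idf(long numUsers, long usersForItem) {
        return Math.log((double) numUsers / (1.0 + usersForItem));
      }

      // Returns the weight to store for the pairing, or 0 if it is dropped.
      // Note that the LLR score itself is discarded once the test passes.
      static double edgeWeight(long k11, long k12, long k21, long k22,
                               long numUsers, long usersForSourceItem) {
        double llr = LogLikelihood.logLikelihoodRatio(k11, k12, k21, k22);
        if (llr < LLR_THRESHOLD) {
          return 0.0;
        }
        return idf(numUsers, usersForSourceItem);
      }
    }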



-- 
Ted Dunning, CTO
DeepDyve

Re: Re: Re: got Error: GC overhead limit exceeded when generateproductsimilariy

Posted by Sean Owen <sr...@gmail.com>.
Yes, I agree that keeping all pairs is quite expensive, unless your
data set is relatively small (like tens of thousands of items). If
you're not running out of memory, OK, you can get away with it for
now.

But yes, many of the similarities will not contain much information
and don't add much value -- the question is, which ones?

For Pearson correlation-based similarity, it's not just a matter of
keeping the ones with the largest and smallest similarity scores --
those nearest to 1 or -1. A similarity of 0 could still be very useful
information. I think you would actually want to keep an item-item pair
based on how many users expressed a preference for both items. The more
such users, the more important it is to keep that pair.

If you'd like an example of efficiently looking through a large list
of things, and keeping only the "top n" of them, see the TopItems
class. You don't want to generate all pairs at once, then throw some
away -- that would still run you out of memory.
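
In case it helps, here's the shape of that trick with made-up names (this
is not the TopItems API itself): stream candidate pairs through a
size-capped min-heap, so at most n of them are in memory at any moment. The
score could be, say, the number of co-rating users from above.

    import java.util.PriorityQueue;

    final class TopNPairs {

      static final class ScoredPair {
        final long itemA;
        final long itemB;
        final double score;
        ScoredPair(long itemA, long itemB, double score) {
          this.itemA = itemA;
          this.itemB = itemB;
          this.score = score;
        }
      }

      private final int n;
      // Min-heap on score: the root is always the weakest of the kept pairs.
      private final PriorityQueue<ScoredPair> heap =
          new PriorityQueue<>((a, b) -> Double.compare(a.score, b.score));

      TopNPairs(int n) {
        this.n = n;
      }

      // Offer each pair as it is generated; never materialize all pairs at once.
      void offer(long itemA, long itemB, double score) {
        if (heap.size() < n) {
          heap.add(new ScoredPair(itemA, itemB, score));
        } else if (score > heap.peek().score) {
          heap.poll();  // evict the current weakest pair
          heap.add(new ScoredPair(itemA, itemB, score));
        }
      }
    }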

Ted will say, and again I agree, that Pearson is not usually the
best similarity metric, though it is widely mentioned in collaborative
filtering examples and literature.

What Ted quotes below is implemented in the framework as
LogLikelihoodSimilarity. For that, I believe it *is* the pairs with
the largest resulting similarity score that you do want to keep. Or at
least it is more reasonable. Ted, maybe you can check my thinking on
that.
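
For anyone following along, using it looks roughly like this. This is a
sketch from memory -- package names and the preference-file format
("prefs.csv" is just a placeholder) may differ by Mahout version, so check
your own tree.

    import java.io.File;

    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

    public class LogLikelihoodExample {
      public static void main(String[] args) throws Exception {
        // Each line of the file: userID,itemID[,preference]
        DataModel model = new FileDataModel(new File("prefs.csv"));
        ItemSimilarity similarity = new LogLikelihoodSimilarity(model);
        // Larger values indicate a more significant pairing.
        System.out.println(similarity.itemSimilarity(123L, 456L));
      }
    }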

Sean

On Mon, Nov 9, 2009 at 7:09 AM, Ted Dunning <te...@gmail.com> wrote:
> Close.
>
> See the link below for one approach to finding the most important ones.  I
> believe that Sean has added something like this to Taste/Mahout.
>
> http://tdunning.blogspot.com/2008/03/surprise-and-coincidence.html