Posted to user@mahout.apache.org by alt <al...@gmail.com> on 2014/09/01 14:45:15 UTC

Re: UserBasedRecommender question

Ted, could you please explain this point:

> If you only have 20 categories, I would recommend that you consider using
> different technologies than recommendations.  Simply building 20
> classifiers is likely to be as effective or more so.

Suppose we want to build a classifier that predicts category N as the
"label", and we train it on the whole user data set or on a
representative sample. The classifier then learns that every combination
of features it has seen among users not interested in N implies that a
user is not interested in N. So if the classifier fits the data well, it
will give no positive answers for users from that same data who are not
interested in N yet. Isn't that so?
Positives would only become possible after some time, once the data has changed significantly. Right?
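For concreteness, Ted's suggestion amounts to one-vs-rest classification: one binary classifier per category, each trained to predict interest in that category from user features. A minimal sketch in Python with scikit-learn follows; the synthetic data and the use of logistic regression are my own illustrative assumptions, not something from this thread:

```python
# One binary classifier per category ("one-vs-rest"), as suggested.
# Synthetic data: an n_users x n_features matrix of user features, and
# labels saying whether each user engaged with each category.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_features, n_categories = 200, 10, 3  # 20 in the real case

X = rng.normal(size=(n_users, n_features))
# Hidden "true" preferences, used only to generate labels here.
W = rng.normal(size=(n_features, n_categories))
Y = (X @ W > 0).astype(int)  # Y[u, c] = 1 if user u likes category c

# Train one binary classifier per category.
classifiers = [
    LogisticRegression().fit(X, Y[:, c]) for c in range(n_categories)
]

# Score a new user against every category; recommend the best-scoring ones.
x_new = rng.normal(size=(1, n_features))
scores = np.array([clf.predict_proba(x_new)[0, 1] for clf in classifiers])
print("category scores:", scores)
print("best category:", scores.argmax())
```

Note that the classifiers generalize from feature patterns, so a user whose features resemble those of N-interested users can still get a positive score even if that exact user has never shown interest in N, which bears on the question above.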

WBR Oleg



> From: Ted Dunning <te...@gmail.com>
> To: "user@mahout.apache.org" <us...@mahout.apache.org>
> Cc:
> Date: Wed, 6 Aug 2014 12:16:31 -0600
> Subject: Re: UserBasedRecommender question
> If you only have 20 categories, I would recommend that you consider using
> different technologies than recommendations.  Simply building 20
> classifiers is likely to be as effective or more so.