Posted to user@mahout.apache.org by Floris Devriendt <fl...@gmail.com> on 2014/05/20 14:51:57 UTC

Recommender Systems - RecommenderIRStatsEvaluator

Hey all,

The *RecommenderEvaluator* has an argument for choosing how large the training
set should be (and thus, implicitly, the size of the test set), but the
*RecommenderIRStatsEvaluator* does not seem to accept such an argument in its
*.evaluate()* method. That's why I'm wondering how the
*RecommenderIRStatsEvaluator* works internally.

I have the following questions on *RecommenderIRStatsEvaluator*:

   - Is there a way to specify the train and test set like you can with the
   *RecommenderEvaluator*?
   - Is it possible to perform k-fold cross-validation with the
   *RecommenderIRStatsEvaluator*?
   - How does the default way of evaluation work with
   *RecommenderIRStatsEvaluator*?

If somebody has an answer to any of these questions, it would be greatly
appreciated.

Kind regards,
Floris Devriendt

Re: Recommender Systems - RecommenderIRStatsEvaluator

Posted by Tevfik Aytekin <te...@gmail.com>.
>    - Is there a way to specify the train and test set like you can with the
>    *RecommenderEvaluator*?
No, though you can specify the evaluation percentage. This follows from
the evaluation logic: for each user, the evaluator takes away that
user's relevant items, asks the recommender for a top-N list, and then
checks whether the held-out relevant items appear in that list. It
would also be possible (and in some ways better, I think) to first
split the data into training and test sets and then select the
relevant items from the test set, but this is not how it is
implemented.
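
To make that concrete, here is a small self-contained sketch of the
hold-out-the-relevant-items logic. This is an illustration only, not the
actual Mahout source: the class name, the toy *Recommender* interface, and
the *precisionAtN* and *demo* methods are all my own inventions for the
example.

```java
import java.util.*;

// Sketch of the IR-stats idea described above: for each user, hold out the
// items rated at or above a relevance threshold, recommend top-N from the
// remaining data, and measure what fraction of the top-N list is held-out.
public class IRStatsSketch {

    // Toy recommender interface; Mahout's real Recommender works on a DataModel.
    interface Recommender {
        List<Long> recommend(long userId, int howMany, Map<Long, Map<Long, Double>> data);
    }

    static double precisionAtN(Map<Long, Map<Long, Double>> prefs,
                               Recommender rec, int n, double relevanceThreshold) {
        double total = 0.0;
        int users = 0;
        for (Map.Entry<Long, Map<Long, Double>> entry : prefs.entrySet()) {
            long userId = entry.getKey();
            Map<Long, Double> userPrefs = entry.getValue();
            // 1. Items rated at or above the threshold count as "relevant".
            Set<Long> relevant = new HashSet<>();
            for (Map.Entry<Long, Double> e : userPrefs.entrySet()) {
                if (e.getValue() >= relevanceThreshold) {
                    relevant.add(e.getKey());
                }
            }
            if (relevant.isEmpty()) {
                continue;
            }
            // 2. Build a training view with this user's relevant items removed.
            Map<Long, Map<Long, Double>> training = new HashMap<>(prefs);
            Map<Long, Double> reduced = new HashMap<>(userPrefs);
            reduced.keySet().removeAll(relevant);
            training.put(userId, reduced);
            // 3. Recommend top-N from the training view and count hits.
            List<Long> topN = rec.recommend(userId, n, training);
            long hits = topN.stream().filter(relevant::contains).count();
            total += (double) hits / n;
            users++;
        }
        return users == 0 ? 0.0 : total / users;
    }

    // Tiny demo: two users and a trivial "most popular unseen item" recommender.
    static double demo() {
        Map<Long, Map<Long, Double>> prefs = new HashMap<>();
        prefs.put(1L, new HashMap<>(Map.of(10L, 5.0, 11L, 2.0)));
        prefs.put(2L, new HashMap<>(Map.of(10L, 4.0, 12L, 1.0)));
        Recommender popular = (userId, howMany, data) -> {
            Map<Long, Integer> counts = new HashMap<>();
            for (Map<Long, Double> up : data.values()) {
                for (long item : up.keySet()) {
                    counts.merge(item, 1, Integer::sum);
                }
            }
            Set<Long> seen = data.get(userId).keySet();
            return counts.keySet().stream()
                    .filter(i -> !seen.contains(i))
                    .sorted(Comparator.<Long>comparingInt(i -> -counts.get(i))
                            .thenComparing(Comparator.<Long>naturalOrder()))
                    .limit(howMany)
                    .toList();
        };
        return precisionAtN(prefs, popular, 1, 4.0);
    }

    public static void main(String[] args) {
        System.out.println("precision@1 = " + demo());
    }
}
```

The real evaluator also computes recall, fall-out, and nDCG over the same
held-out lists, but the hold-out mechanism is the same.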

>    - Is it possible to perform k-fold cross-validation with the
>    *RecommenderIRStatsEvaluator*?
I don't think so.
>    - How does the default way of evaluation work with
>    *RecommenderIRStatsEvaluator*?
I tried to explain it above.

I would also like to point out that it is not difficult to write your
own evaluation code for your specific purposes.
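
In that spirit, here is a minimal sketch of building the k folds for
cross-validation by hand. Nothing here is Mahout API: the *Pref* record and
the round-robin fold assignment after a seeded shuffle are my own choices
for the example.

```java
import java.util.*;

// Sketch of a k-fold split over (user, item, rating) triples, which the
// IR evaluator does not provide out of the box. Each preference lands in
// exactly one fold; fold i is the test set while the other folds train.
public class KFoldSplit {

    record Pref(long user, long item, double rating) {}

    // Deterministic round-robin assignment after a seeded shuffle.
    static List<List<Pref>> split(List<Pref> prefs, int k, long seed) {
        List<Pref> shuffled = new ArrayList<>(prefs);
        Collections.shuffle(shuffled, new Random(seed));
        List<List<Pref>> folds = new ArrayList<>();
        for (int i = 0; i < k; i++) {
            folds.add(new ArrayList<>());
        }
        for (int i = 0; i < shuffled.size(); i++) {
            folds.get(i % k).add(shuffled.get(i));
        }
        return folds;
    }

    // Training set for fold i = everything not in fold i.
    static List<Pref> trainingFor(List<List<Pref>> folds, int testFold) {
        List<Pref> training = new ArrayList<>();
        for (int i = 0; i < folds.size(); i++) {
            if (i != testFold) {
                training.addAll(folds.get(i));
            }
        }
        return training;
    }

    public static void main(String[] args) {
        // 4 users x 5 items = 20 preferences, split into 4 folds of 5.
        List<Pref> prefs = new ArrayList<>();
        for (long u = 1; u <= 4; u++) {
            for (long it = 1; it <= 5; it++) {
                prefs.add(new Pref(u, it, 3.0));
            }
        }
        List<List<Pref>> folds = split(prefs, 4, 42L);
        for (int i = 0; i < folds.size(); i++) {
            System.out.println("fold " + i + ": test=" + folds.get(i).size()
                    + " train=" + trainingFor(folds, i).size());
        }
    }
}
```

Each fold can then serve in turn as the held-out test set (from which the
relevant items are drawn) while the remaining folds are loaded into the
training model, and the per-fold scores averaged.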

Tevfik

