Posted to dev@mahout.apache.org by Ted Dunning <te...@gmail.com> on 2010/09/22 06:20:19 UTC

OnlineAuc testing

I just did some detailed testing of OnlineAuc.  Somewhat surprisingly (to
me, at least), the FAIR policy was decidedly sub-optimal and FIFO was the
best.

Also, at 10,000 samples and a window of 10, OnlineAuc is almost identical in
accuracy to Auc.  A window of 100 improves accuracy only minutely, nowhere
near enough to justify the extra cost.  Decreasing the window to 2 noticeably
reduces accuracy for the FAIR and RANDOM policies, but causes almost no
change for the FIFO policy.
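
For anyone wondering what the policies actually do, here is a minimal,
self-contained sketch of the windowed AUC idea.  This is illustrative only,
not the actual OnlineAuc code in Mahout, and the class and method names
(WindowedAuc, addSample, auc) are made up for the example: each class keeps
a fixed-size window of scores, the replacement policy decides which slot a
new score overwrites once the window is full, and the AUC estimate is the
usual pairwise (Mann-Whitney) comparison between the two windows.

import java.util.Random;

/**
 * Sketch of windowed online AUC estimation (not Mahout's implementation).
 * Scores for each class are kept in a fixed-size window; when the window is
 * full, the replacement policy decides which slot the new score overwrites.
 * AUC is the fraction of (positive, negative) pairs in the windows where the
 * positive score is higher, counting ties as 0.5.
 */
public class WindowedAuc {
  public enum ReplacementPolicy { FIFO, FAIR, RANDOM }

  private final int windowSize;
  private final ReplacementPolicy policy;
  private final double[][] scores;   // scores[0] = negatives, scores[1] = positives
  private final int[] filled;        // occupied slots per class
  private final int[] seen;          // total samples seen per class
  private final Random rand = new Random();

  public WindowedAuc(int windowSize, ReplacementPolicy policy) {
    this.windowSize = windowSize;
    this.policy = policy;
    this.scores = new double[2][windowSize];
    this.filled = new int[2];
    this.seen = new int[2];
  }

  /** category is 0 (negative) or 1 (positive); score is the model output. */
  public void addSample(int category, double score) {
    seen[category]++;
    if (filled[category] < windowSize) {
      scores[category][filled[category]++] = score;
      return;
    }
    switch (policy) {
      case FIFO:
        // overwrite the oldest slot; the window is a circular buffer of the
        // most recent windowSize scores
        scores[category][(seen[category] - 1) % windowSize] = score;
        break;
      case FAIR: {
        // reservoir sampling: every sample seen so far stays in the window
        // with equal probability windowSize / seen
        int slot = rand.nextInt(seen[category]);
        if (slot < windowSize) {
          scores[category][slot] = score;
        }
        break;
      }
      case RANDOM:
        // always overwrite a uniformly random slot, regardless of age
        scores[category][rand.nextInt(windowSize)] = score;
        break;
    }
  }

  /** Pairwise comparison of the two windows, i.e. the Mann-Whitney estimate. */
  public double auc() {
    if (filled[0] == 0 || filled[1] == 0) {
      return Double.NaN;  // undefined until both classes have been observed
    }
    double wins = 0;
    for (int i = 0; i < filled[1]; i++) {
      for (int j = 0; j < filled[0]; j++) {
        if (scores[1][i] > scores[0][j]) {
          wins += 1;
        } else if (scores[1][i] == scores[0][j]) {
          wins += 0.5;
        }
      }
    }
    return wins / (filled[1] * (double) filled[0]);
  }
}

Seen this way, FIFO keeps the most recent windowSize scores, FAIR keeps an
approximately uniform sample of everything seen so far, and RANDOM overwrites
slots without regard to age, which may help explain why the policies behave
differently at small window sizes.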

As a result of these tests, I have set the window to 10 and the default
policy to FIFO.

Re: OnlineAuc testing

Posted by Drew Farris <dr...@apache.org>.
This is good to know, Ted. Thanks for sharing your results. We must
make sure bits like this make it onto the wiki page once the docs are
put together; hopefully I'll find the cycles to help out with that.
