Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/11/24 10:43:37 UTC

[GitHub] beeva-enriqueotero opened a new issue #8807: Incoherent training and validation metrics in recommender system example

beeva-enriqueotero opened a new issue #8807: Incoherent training and validation metrics in recommender system example
URL: https://github.com/apache/incubator-mxnet/issues/8807
 
 
   In the recommender system examples, the [Binary Predictions notebook](https://github.com/apache/incubator-mxnet/blob/master/example/recommenders/demo2-binary.ipynb) uses different metrics for training and validation. Training uses MAERegressionOutput within CosineLoss, whereas eval_metric is RMSE, as defined in [matrix_fact.py](https://github.com/apache/incubator-mxnet/blob/master/example/recommenders/matrix_fact.py).
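   To see why the two numbers are hard to compare, here is a minimal pure-Python sketch (no MXNet required; the data is made up) showing that the quantity the MAE-style loss optimises and the RMSE the eval_metric reports diverge on the same predictions:
   
   ```python
   import math
   
   def mae(preds, labels):
       """Mean absolute error -- the quantity an MAE-style loss optimises."""
       return sum(abs(p - y) for p, y in zip(preds, labels)) / len(preds)
   
   def rmse(preds, labels):
       """Root mean squared error -- the eval_metric reported during validation."""
       return math.sqrt(sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(preds))
   
   labels = [1.0, 0.0, 1.0, 0.0]   # hypothetical binary targets
   preds  = [0.9, 0.1, 0.4, 0.6]   # hypothetical model scores
   
   print(mae(preds, labels))   # 0.35
   print(rmse(preds, labels))  # ~0.430 -- a different number for the same model
   ```
   
   Since the training log prints one metric and validation prints another, a gap between them says nothing about overfitting by itself.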
   
   Labels: MAE, CosineLoss, jupyter notebook example, binary recommender examples
   
   In other examples that use LinearRegressionOutput, the training metric coincides with RMSE, but this does not hold for general output layers. Moreover, I can't find a way to use a typical metric such as cross-entropy for both training and validation. This problem is related to https://github.com/apache/incubator-mxnet/issues/6179
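   As a workaround sketch (not the example's actual code), the same cross-entropy can be computed by hand and applied to both training and held-out predictions, which makes the two logs directly comparable; the core of it, in plain Python with made-up data, is:
   
   ```python
   import math
   
   def binary_cross_entropy(preds, labels, eps=1e-12):
       """Average negative log-likelihood for binary labels.
       Clipping with eps avoids log(0) on saturated predictions."""
       total = 0.0
       for p, y in zip(preds, labels):
           p = min(max(p, eps), 1.0 - eps)  # clip score into (0, 1)
           total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
       return total / len(preds)
   
   # Applying the same function to training and validation predictions
   # yields numbers on the same scale.
   print(binary_cross_entropy([0.9, 0.1, 0.4, 0.6], [1.0, 0.0, 1.0, 0.0]))
   ```
   
   MXNet's built-in mx.metric.CrossEntropy wraps the same computation, but as of this issue I see no documented way to have the training loop report it while the loss layer dictates a different training metric.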

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services