Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/02/27 20:09:11 UTC

[GitHub] anirudhacharya edited a comment on issue #14262: Fix NaN value comparisons in relu, max and min ops

URL: https://github.com/apache/incubator-mxnet/pull/14262#issuecomment-468011149
 
 
   @szha Ideally, I do not want the `relu` operator to clip `NaN` values to zero, especially when I am trying to debug a model.
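   
   A minimal sketch of the debugging problem (a hypothetical REPL session; `mx.nd.relu` is the operator this PR touches):
   ```
   >>> import mxnet as mx
   >>> x = mx.nd.array([float('nan')])
   >>> mx.nd.relu(x)  # pre-fix this clips the NaN to 0, silently hiding the bad value; with the fix it stays NaN
   ```
   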
   And with regard to `maximum` and `minimum`, it is not about 'starting to support' `NaN` values but about fixing the currently inconsistent handling of `NaN` values, as sketched below.
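   
   Continuing the session above, a minimal way to see the inconsistency (a sketch, assuming the op reduces to a plain IEEE-style comparison such as `a > b ? a : b`, where every comparison against NaN is false):
   ```
   >>> a = mx.nd.array([float('nan')])
   >>> b = mx.nd.array([1.0])
   >>> mx.nd.maximum(a, b)  # NaN > 1.0 is false, so this would yield 1.0
   >>> mx.nd.maximum(b, a)  # 1.0 > NaN is also false, so this would yield NaN
   ```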
   
   PyTorch's `relu` behavior -
   ```
   >>> import torch
   >>> import torch.nn as nn
   >>> m = nn.ReLU()
   >>> import numpy as np
   >>> input = np.NaN * torch.ones(1)
   >>> out = m(input)
   >>> out
   tensor([nan])
   ```
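   
   For reference, NumPy already separates the two semantics, which is the kind of consistency being asked for here (this is NumPy's documented behavior):
   ```
   >>> import numpy as np
   >>> np.maximum(np.nan, 1.0)  # `maximum` propagates NaN
   nan
   >>> np.fmax(np.nan, 1.0)     # `fmax` ignores NaN and returns the other operand
   1.0
   ```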
   
   Also, I found a related issue here - https://github.com/apache/incubator-mxnet/issues/14157
   
   Edit - another issue, filed some time ago, that had slipped my memory - https://github.com/apache/incubator-mxnet/issues/11115
