Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/04/04 01:20:05 UTC

[GitHub] [incubator-mxnet] RuRo commented on issue #17953: Models saved at different training stages with different forward speeds

URL: https://github.com/apache/incubator-mxnet/issues/17953#issuecomment-608949381
 
 
   Hi. I've previously run into a similar problem, which turned out to be caused by [denormal numbers](https://en.wikipedia.org/wiki/Denormal_number) in the model weights. This issue only affects inference times on the CPU, since on many modern CPUs floating-point operations are significantly slower when the operands are denormal.
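   For reference, a float32 value is denormal (subnormal) when it is nonzero but smaller in magnitude than `np.finfo(np.float32).tiny`, the smallest positive *normal* float32 (about 1.18e-38):

```python
import numpy as np

tiny = np.finfo(np.float32).tiny  # smallest positive normal float32, ~1.18e-38
x = np.float32(1e-40)             # nonzero, but below tiny -> stored as a subnormal
print(0 < x < tiny)               # True: x is a denormal number
```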
   
   In my experience, denormal numbers often appear in model weights when you have vanishing gradients and/or recurrent connections. The fix for me was simply to set all the denormalized weights to 0 manually. This shouldn't significantly change your model's outputs, since denormalized numbers are so small in the first place.
   
   ```python
   import numpy as np
   import mxnet as mx
   
    # Load your model however you normally would; SymbolBlock.imports is
    # one option for models exported via export()/hybridize.
    model = mx.gluon.SymbolBlock.imports(symbol_path, ['data'], params_path)
    for p, v in model.collect_params().items():
        vd = v.data().asnumpy()
        # Zero out anything below the smallest normal float32 (~1.18e-38)
        vd[np.abs(vd) < 1e-37] = 0.0
        v.set_data(mx.nd.array(vd))
   ```
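   If you want to verify that denormals are actually present before patching the weights, a numpy-only check like the following works on any parameter array (the helper name `count_denormals` is mine, not an MXNet API):

```python
import numpy as np

def count_denormals(arr):
    """Count nonzero entries with magnitude below the smallest normal float."""
    tiny = np.finfo(arr.dtype).tiny
    mag = np.abs(arr)
    return int(np.count_nonzero((mag > 0) & (mag < tiny)))

# Example: two of these four values are subnormal float32s
weights = np.array([1.0, 1e-40, 0.0, -2e-39], dtype=np.float32)
print(count_denormals(weights))  # 2
```

   Running this over `v.data().asnumpy()` for each parameter tells you which layers are affected and whether the zeroing pass is worth doing.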
