Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/02/28 10:53:00 UTC

[GitHub] IvyGongoogle opened a new issue #14282: float16 does not work for `rnn` model

URL: https://github.com/apache/incubator-mxnet/issues/14282
 
 
   Hello, I am using this [predict-cpp](https://github.com/apache/incubator-mxnet/tree/master/example/image-classification/predict-cpp) example to run inference on some images.
   When I run a `cnn` model in `float16`, inference is much faster than in `float32`. But when I run an `rnn` model in `float16`, the speed is nearly the same as in `float32`.
   What causes this? Can you give some advice?
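   
   For context, here is a minimal Python sketch (not the C++ predict-cpp path mentioned above) of casting a checkpoint's parameters to `float16` before inference; the checkpoint prefix `model` and epoch `0` are placeholders, not values from this issue.
   
   ```python
   import mxnet as mx
   
   # Load a trained checkpoint; 'model' and epoch 0 are hypothetical placeholders.
   sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)
   
   # Cast every parameter NDArray to float16 before binding the model for inference.
   arg_params_fp16 = {k: v.astype('float16') for k, v in arg_params.items()}
   aux_params_fp16 = {k: v.astype('float16') for k, v in aux_params.items()}
   ```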
   
   Looking forward to your reply.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services