Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2017/12/21 23:35:39 UTC

[GitHub] kalpitdixit commented on issue #9171: MXNet: Using FusedRNNCell with its "bidirectional" flag set to True can cause the training run to hang.

kalpitdixit commented on issue #9171: MXNet: Using FusedRNNCell with its "bidirectional" flag set to True can cause the training run to hang.
URL: https://github.com/apache/incubator-mxnet/issues/9171#issuecomment-353484524
 
 
   I am using:
   MXNet==1.0.0
   CUDA==9.0
   cuDNN==7.0
   
   As I understand it, FusedRNNCell is faster than the unfused RNNCell because it calls directly into a CUDA (cuDNN) kernel, and it seems that the "bidirectional" flag in FusedRNNCell is passed straight through to that kernel call. This is just FYI; it might point to a CUDA/cuDNN kernel issue, but I am not a CUDA expert.
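   
   For concreteness, I construct the cell roughly like this (the hidden size, number of layers, sequence length, and variable names below are placeholders, not my actual training configuration):
   
   import mxnet as mx
   
   # Placeholder dimensions -- not the actual configuration
   seq_len, batch_size, input_dim, num_hidden = 50, 32, 128, 256
   
   # Input in 'NTC' layout: (batch_size, seq_len, input_dim)
   data = mx.sym.Variable('data')
   
   cell = mx.rnn.FusedRNNCell(num_hidden=num_hidden,
                              num_layers=2,
                              mode='lstm',
                              bidirectional=True,  # the flag that seems to trigger the hang
                              prefix='lstm_')
   
   # unroll() lowers to a single RNN op backed by the cuDNN kernel
   outputs, states = cell.unroll(length=seq_len,
                                 inputs=data,
                                 layout='NTC',
                                 merge_outputs=True)
   
   (cell.unfuse() gives the equivalent stack of non-fused cells, which is what I mean by the unfused RNNCell above.)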
   
   @eric-haibin-lin 
