Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/09/15 11:49:59 UTC

[GitHub] [incubator-mxnet] QueensGambit commented on issue #16173: Saving and loading cudNN autotune and graph optimization

QueensGambit commented on issue #16173: Saving and loading cudNN autotune and graph optimization
URL: https://github.com/apache/incubator-mxnet/issues/16173#issuecomment-531559033
 
 
   Thank you for the reply, @pengzhao-intel. I updated the issue description regarding MKLDNN.
   
   I see the point about portability and backward compatibility issues.
   Maybe it would be better to define `optimize` as a string argument which must be one of `{'on_bind', 'save_reload', 'disabled'}`:
   
   ```python
   def bind(self, ctx, args, args_grad=None, grad_req='write',
            aux_states=None, group2ctx=None, shared_exec=None, optimize='on_bind'):
       """
       # ...
       optimize : str, optional, default 'on_bind'
           Must be one of {'on_bind', 'save_reload', 'disabled'}.
           'on_bind': graph optimization / cuDNN autotune is executed during model binding.
           'save_reload': MXNet attempts to recover previously saved optimization information;
                          otherwise MXNet performs the optimization and saves it to disk.
           'disabled': no graph optimization / cuDNN autotune is performed.
       """
   ```
   
   With the default `optimize='on_bind'`, binding behaves exactly as it does today, so all existing code is unaffected.
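   
   For illustration, a call site with the proposed argument could look like the sketch below (a minimal sketch; the `optimize` parameter and its modes are the proposal above, not an existing MXNet API):
   
   ```python
   import mxnet as mx
   
   # Hypothetical usage of the proposed 'optimize' argument (not part of the current bind() API).
   data = mx.sym.Variable('data')
   net = mx.sym.FullyConnected(data=data, num_hidden=128, name='fc1')
   
   args = {'data': mx.nd.ones((1, 64)),
           'fc1_weight': mx.nd.ones((128, 64)),
           'fc1_bias': mx.nd.zeros((128,))}
   
   # First bind: no cached optimization information exists yet, so MXNet would run
   # graph optimization / cuDNN autotune and save the result to disk.
   executor = net.bind(ctx=mx.gpu(0), args=args, optimize='save_reload')
   
   # Subsequent binds of the same symbol with the same shapes would reload the saved
   # information instead of re-running the optimization, shortening bind time.
   ```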
   
   As a separate point, it might be preferable to treat graph optimization (MKLDNN graph optimization / TensorRT graph fusion) as a separate setting from cuDNN autotune, because cuDNN autotune might also be performed on fused graphs in future versions.
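   
   A minimal sketch of what such a separation could look like at the signature level (the argument names `graph_optimize` and `cudnn_autotune` are placeholders for this discussion, not an agreed-upon API):
   
   ```python
   def bind(self, ctx, args, args_grad=None, grad_req='write',
            aux_states=None, group2ctx=None, shared_exec=None,
            graph_optimize='on_bind',   # MKLDNN graph optimization / TensorRT graph fusion
            cudnn_autotune='on_bind'):  # cuDNN autotune, possibly run on the already fused graph
       """
       # ...
       Both arguments would accept the same modes as above:
       {'on_bind', 'save_reload', 'disabled'}.
       Keeping them separate would allow, for example, reloading a fused graph from disk
       while still re-running cuDNN autotune on it.
       """
   ```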

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services