Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/05/06 19:02:03 UTC

[GitHub] reminisce commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

reminisce commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r186251436
 
 

 ##########
 File path: include/mxnet/c_api.h
 ##########
 @@ -1423,13 +1423,15 @@ MXNET_DLL int MXSymbolInferType(SymbolHandle sym,
  * \param excluded_symbols array of symbols to be excluded from being quantized
  * \param num_offline number of parameters that are quantized offline
  * \param offline_params array of c strings representing the names of params quantized offline
+ * \param dev_type device type 
  */
 MXNET_DLL int MXQuantizeSymbol(SymbolHandle sym_handle,
                                SymbolHandle *ret_sym_handle,
                                const mx_uint num_excluded_symbols,
                                const SymbolHandle *excluded_symbols,
                                const mx_uint num_offline,
-                               const char **offline_params);
+                               const char **offline_params,
+                               int dev_type);
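 /* Illustrative call site only (not part of this patch): a minimal sketch of
  * how a frontend might invoke the extended API. The symbol handle, the
  * offline parameter names and the device code below are hypothetical; 1 is
  * assumed here to follow MXNet's usual kCPU device-type convention. */
 SymbolHandle quantized = NULL;
 const char *offline[] = {"conv0_weight", "fc0_weight"};   /* made-up names */
 int ret = MXQuantizeSymbol(sym,            /* existing SymbolHandle        */
                            &quantized,     /* receives the quantized graph */
                            0, NULL,        /* no excluded symbols          */
                            2, offline,     /* params quantized offline     */
                            1);             /* dev_type: CPU                */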
 
 Review comment:
   I think `dev_type` is not specific enough to infer that MKLDNN quantization does not need a requantize op. There may be other CPU implementations that do need the requantize op; for example, we may use TVM to generate quantized conv kernels, and requantization might still be required there.
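   
   For illustration only (not a signature this PR or MXNet defines), the requirement could be stated explicitly instead of being inferred from the device type, e.g. with a hypothetical flag:
   
       MXNET_DLL int MXQuantizeSymbol(SymbolHandle sym_handle,
                                      SymbolHandle *ret_sym_handle,
                                      const mx_uint num_excluded_symbols,
                                      const SymbolHandle *excluded_symbols,
                                      const mx_uint num_offline,
                                      const char **offline_params,
                                      int need_requantize);  /* hypothetical name:
                                                                1 = insert requantize ops,
                                                                0 = backend fuses them */
   
   That way a TVM-generated CPU kernel that still needs requantization could say so directly, independent of the device type.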

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services