Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2018/11/25 01:55:03 UTC

[GitHub] marcoabreu commented on a change in pull request #13310: [MXNET-703] Update to TensorRT 5, ONNX IR 3. Fix inference bugs.

URL: https://github.com/apache/incubator-mxnet/pull/13310#discussion_r236058387
 
 

 ##########
 File path: src/operator/contrib/nnvm_to_onnx.cc
 ##########
 @@ -382,31 +387,16 @@ void ConvertBatchNorm(NodeProto* node_proto, const NodeAttrs& attrs,
   AttributeProto* const spatial = node_proto->add_attribute();
   spatial->set_name("spatial");
   spatial->set_type(AttributeProto::INT);
-  spatial->set_i(1);
-
-  AttributeProto* const consumed = node_proto->add_attribute();
-  consumed->set_name("consumed_inputs");
-  consumed->set_type(AttributeProto::INTS);
-
-  for (int i = 0; i < 5; i++) {
-    int val = (i < 3) ? 0 : 1;
-    consumed->add_ints(static_cast<int64>(val));
-  }
+  // MXNet computes mean and variance per feature for batchnorm.  Enabling spatial mode
+  // (default in ONNX3) implies running batchnorm on all spatial features so we need to explicitly
+  // disable this for MXNet's BatchNorm.
+  spatial->set_i(0);
 
 Review comment:
   Can we add a test for this in case the default behaviour changes in future releases?
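
  One possible shape for such a test (not part of this PR): load the converted graph with the
  onnx Python package and assert that every BatchNormalization node carries spatial == 0, so a
  future change in the ONNX default would be caught. This is only a sketch; the file path and
  helper name below are hypothetical.

      # Sketch of a regression test for the spatial attribute (hypothetical names/paths).
      import onnx

      def check_batchnorm_spatial(model_path):
          model = onnx.load(model_path)
          for node in model.graph.node:
              if node.op_type != "BatchNormalization":
                  continue
              attrs = {attr.name: attr for attr in node.attribute}
              assert "spatial" in attrs, "BatchNormalization is missing the spatial attribute"
              # MXNet computes mean/variance per feature, so spatial mode must stay disabled.
              assert attrs["spatial"].i == 0

      check_batchnorm_spatial("converted_model.onnx")  # hypothetical path to the exported model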

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services