Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/01/17 00:39:31 UTC

[GitHub] [tvm] jwfromm commented on a change in pull request #7300: [Relay][Frontend][Onnx] Compare against onnxruntime more consistently during testing

jwfromm commented on a change in pull request #7300:
URL: https://github.com/apache/tvm/pull/7300#discussion_r559058323



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
@@ -2138,8 +1928,8 @@ def check_torch_conversion(model, input_size):
     # Set verbose=True for more output
     torch.onnx.export(model(), dummy_input, file_name, export_params=True, verbose=False)
     onnx_model = onnx.load(file_name)
-    input_data = np.random.uniform(size=input_size).astype("int32")
-    verify_with_ort_with_inputs(onnx_model, [input_data])
+    input_data = np.random.uniform(size=input_size).astype("float32")
+    verify_with_ort_with_inputs(onnx_model, [input_data], apply_softmax=True)

Review comment:
       This is a fun one that I wanted to point out. Previously we were casting inputs to `int32`; because `np.random.uniform` samples from `[0, 1)` by default, every value was being truncated to 0. Using non-zero inputs caused some minor output mismatch due to numerical instability, but applying softmax (which torchvision models don't use by default) reduces the numerical difference to well below our test threshold.
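
Both effects can be reproduced with a small NumPy sketch (the standalone `softmax` below is a hypothetical illustration, not the helper used in the test suite):

```python
import numpy as np

# np.random.uniform samples from [0, 1) by default, so casting the
# result to an integer dtype truncates every value to exactly 0.
x = np.random.uniform(size=(4, 8))
assert np.all(x.astype("int32") == 0)

# A plain softmax maps raw logits into [0, 1] and normalizes them to
# sum to 1, which compresses small absolute mismatches between two
# runtimes' outputs.
def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

a = np.array([10.0, 20.0, 30.0])
b = a + np.array([1e-3, -1e-3, 2e-3])  # simulate tiny numerical drift
# The post-softmax difference is far smaller than the raw logit difference.
assert np.abs(softmax(a) - softmax(b)).max() < np.abs(a - b).max()
```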




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org