Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/06/22 17:32:48 UTC

[GitHub] [tvm] mbrookhart opened a new pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

mbrookhart opened a new pull request #8305:
URL: https://github.com/apache/tvm/pull/8305


   Interestingly, ONNX Runtime doesn't actually run this op in integer arithmetic; it dequantizes the inputs, runs the add in float, and requantizes the result. I had to do the same here to get the tests to pass.
   
   The PR has a kind of crazy way of producing contrib ops from ONNX Runtime for testing. Let me know what you think.
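   For reference, a minimal numpy sketch of that dequantize -> add -> requantize lowering (the scale/zero-point parameter names follow the QLinearAdd input convention; this is illustrative, not the TVM implementation):

   ```python
   import numpy as np

   def qlinear_add(a, a_scale, a_zp, b, b_scale, b_zp, c_scale, c_zp):
       # Dequantize both uint8 inputs to float32.
       a_fp = (a.astype("float32") - a_zp) * a_scale
       b_fp = (b.astype("float32") - b_zp) * b_scale
       # Run the add in float, as ONNX Runtime does internally.
       c_fp = a_fp + b_fp
       # Requantize the float sum back to uint8.
       c = np.round(c_fp / c_scale) + c_zp
       return np.clip(c, 0, 255).astype("uint8")
   ```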


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] FrozenGene commented on a change in pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #8305:
URL: https://github.com/apache/tvm/pull/8305#discussion_r660469331



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
@@ -4640,6 +4641,63 @@ def repeat(N, D):
     )
 
 
+def verify_qlinearadd(a_shape, b_shape, c_shape):
+
+    a_array = np.random.random(a_shape).astype("float32")
+    b_array = np.random.random(b_shape).astype("float32")
+
+    input_nodes = [
+        helper.make_tensor_value_info("a", TensorProto.FLOAT, list(a_shape)),
+        helper.make_tensor_value_info("b", TensorProto.FLOAT, list(b_shape)),
+    ]
+    input_names = [
+        "a",
+        "b",
+    ]
+    input_values = [a_array, b_array]
+
+    # Build a float32 Add graph; quantize_static below rewrites it into the
+    # com.microsoft QLinearAdd contrib op.
+    node = helper.make_node("Add", ["a", "b"], ["C"])
+    graph = helper.make_graph(
+        [node],
+        "qlinearadd_test",
+        inputs=input_nodes,
+        outputs=[helper.make_tensor_value_info("C", TensorProto.FLOAT, list(c_shape))],
+    )
+    model = helper.make_model(graph, producer_name="qlinearadd_test")
+    from onnxruntime.quantization import quantize_static, CalibrationDataReader, QuantType
+
+    class RandomDataReader(CalibrationDataReader):
+        def __init__(self, n=10):
+            self.data = iter(
+                [
+                    {
+                        "a": np.random.random(a_shape).astype("float32"),
+                        "b": np.random.random(b_shape).astype("float32"),
+                    }
+                    for _ in range(n)
+                ]
+            )
+
+        def get_next(self):
+            return next(self.data, None)
+
+    model_fp32 = "/tmp/model.onnx"

Review comment:
       Must we serialize it into a file? Could we use an in-memory data structure instead?







[GitHub] [tvm] FrozenGene commented on pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on pull request #8305:
URL: https://github.com/apache/tvm/pull/8305#issuecomment-872841000


   @mbrookhart I think you should open a follow-up pull request for `test_forward`: I just noticed that in the ONNX `test_forward` suite, your `test_qlinearadd` is never called.
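   A likely fix, as a sketch (assuming `test_forward.py`'s convention at the time of invoking each test from the main block; the shapes here are hypothetical):

   ```python
   def test_qlinearadd():
       verify_qlinearadd([4, 2], [4, 2], [4, 2])


   if __name__ == "__main__":
       # ... existing test invocations ...
       test_qlinearadd()
   ```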





[GitHub] [tvm] masahi commented on a change in pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8305:
URL: https://github.com/apache/tvm/pull/8305#discussion_r661307396



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
(same diff hunk as quoted in the first review comment above, ending at `+    model_fp32 = "/tmp/model.onnx"`)

Review comment:
       I've seen ONNX export use an in-memory buffer at https://github.com/facebookresearch/detr/blob/master/test_all.py#L136-L138
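       The pattern in that link, for reference: a sketch of exporting through an `io.BytesIO` buffer (the toy `model` and `dummy_input` are stand-ins; as noted below, the linked code uses the PyTorch exporter rather than ONNX's helper):

       ```python
       import io

       import onnx
       import torch

       # A trivial stand-in model; any torch.nn.Module would do here.
       model = torch.nn.Linear(4, 2)
       dummy_input = torch.randn(1, 4)

       buf = io.BytesIO()
       # torch.onnx.export accepts any file-like object, so nothing touches disk.
       torch.onnx.export(model, dummy_input, buf)
       buf.seek(0)
       onnx_model = onnx.load(buf)  # onnx.load also reads from file-like objects
       ```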







[GitHub] [tvm] mbrookhart commented on a change in pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #8305:
URL: https://github.com/apache/tvm/pull/8305#discussion_r660676638



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
(same diff hunk as quoted in the first review comment above, ending at `+    model_fp32 = "/tmp/model.onnx"`)

Review comment:
       I couldn't figure out a way to create the op in memory. :( ONNX's Python helper throws a fit since it isn't a standard op, and ONNX Runtime's quantization API demands files on disk. I wrote it to /tmp so it wouldn't stick around.
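       For context, the on-disk flow being described, sketched with the names from the diff hunk above (the actual test continues past the quoted lines, so the merged code may differ):

       ```python
       import onnx

       model_fp32 = "/tmp/model.onnx"
       model_quant = "/tmp/model.quant.onnx"

       # quantize_static only accepts file paths, hence the round-trip to disk.
       onnx.save_model(model, model_fp32)
       quantize_static(model_fp32, model_quant, RandomDataReader())

       # The quantized model now contains the com.microsoft QLinearAdd contrib op.
       model_quant_onnx = onnx.load(model_quant)
       ```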







[GitHub] [tvm] masahi merged pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #8305:
URL: https://github.com/apache/tvm/pull/8305


   





[GitHub] [tvm] FrozenGene commented on a change in pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #8305:
URL: https://github.com/apache/tvm/pull/8305#discussion_r661150086



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
(same diff hunk as quoted in the first review comment above, ending at `+    model_fp32 = "/tmp/model.onnx"`)

Review comment:
       If so, could we use Python's `tempfile` or TVM's `utils.tempdir()` to handle it?
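       A minimal sketch of that suggestion, assuming TVM's `tvm.contrib.utils.tempdir()` and the `model` / `quantize_static` / `RandomDataReader` names from the diff hunk above (not the code that was merged):

       ```python
       import onnx
       from tvm.contrib import utils

       tmp = utils.tempdir()  # removed automatically when `tmp` is garbage-collected
       model_fp32 = tmp.relpath("model.onnx")
       model_quant = tmp.relpath("model.quant.onnx")

       onnx.save_model(model, model_fp32)
       quantize_static(model_fp32, model_quant, RandomDataReader())
       ```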




[GitHub] [tvm] mbrookhart commented on pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on pull request #8305:
URL: https://github.com/apache/tvm/pull/8305#issuecomment-873030803


   `======== 82 passed, 1122 deselected, 835 warnings in 723.89s (0:12:03) =========`
   
   Something appears to be broken with how CI is running the ONNX tests.





[GitHub] [tvm] masahi commented on a change in pull request #8305: Support QLinearAdd from onnx runtime com.microsoft contrib ops.

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #8305:
URL: https://github.com/apache/tvm/pull/8305#discussion_r661308454



##########
File path: tests/python/frontend/onnx/test_forward.py
##########
(same diff hunk as quoted in the first review comment above, ending at `+    model_fp32 = "/tmp/model.onnx"`)

Review comment:
       Ah, sorry, it was PyTorch export...



