Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/07/13 16:26:35 UTC

[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #6022: Add support for checking big ONNX models

jwfromm commented on a change in pull request #6022:
URL: https://github.com/apache/incubator-tvm/pull/6022#discussion_r453773341



##########
File path: python/tvm/relay/frontend/onnx.py
##########
@@ -2254,8 +2255,13 @@ def from_onnx(model,
 
     Parameters
     ----------
-    model : protobuf object
-        ONNX ModelProto after ONNX v1.1.0
+    model_path : str
+        The path of the ONNX model

Review comment:
       I recommend allowing the `model` argument to be either an Onnx ModelProto or a string pointing to a saved Onnx file. You can then add a check to see if it's a string and, if so, call `onnx.load` and `load_external_data_for_model`. Otherwise, parse the ModelProto as normal. This will make it a lot easier to deal with the existing TVM codebase and allow your tests to pass.

##########
File path: python/tvm/relay/frontend/onnx.py
##########
@@ -2239,7 +2239,8 @@ def _fix_outputs(self, op_name, outputs):
             outputs = outputs[:-1]
         return outputs
 
-def from_onnx(model,
+def from_onnx(model_path=None,
+              external_data_files_path=None,

Review comment:
       Put `external_data_files_path` further down (ideally last) in the argument list. All the importers take `model` and `shape_dict` as their first two arguments, and it'd be nice to keep this consistent.
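The suggested ordering could be sketched as below. The middle parameter names are assumptions about TVM's existing `from_onnx` signature; the point is only that `model` stays first and the new argument trails.

```python
import inspect


def from_onnx(model, shape=None, dtype="float32", opset=None,
              external_data_files_path=None):
    # Signature sketch only: `model` and the shape dict lead, as in the
    # other TVM frontend importers, and the new external-data argument
    # comes last so existing positional callers keep working.
    raise NotImplementedError("signature sketch only")
```

Keeping the new argument last means existing call sites that pass arguments positionally are unaffected by the change.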




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org