Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2021/05/20 03:07:16 UTC

[GitHub] [incubator-mxnet] waytrue17 opened a new pull request #20283: [v1.x] ONNX export for large model

waytrue17 opened a new pull request #20283:
URL: https://github.com/apache/incubator-mxnet/pull/20283


   ## Description ##
   Add ONNX export support for large models (export currently fails for them due to protobuf's 2 GB message limit). When `large_model=True`, the param tensors are saved into separate data files alongside the .onnx model file, which allows models larger than 2 GB to be exported.
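   A minimal usage sketch of the new flag, assuming the `mx.onnx.export_model` entry point and the parameters documented in this PR's docstrings; the model file names, input shape, and the `onnx_file_path` argument name are illustrative assumptions:
   ```python
   import numpy as np
   import mxnet as mx

   # Hypothetical MXNet model files; any exported symbol/params pair works here.
   sym = './resnet-symbol.json'
   params = './resnet-0000.params'

   # With large_model=True the param tensors are written to separate data files
   # next to the .onnx file, keeping the graph itself under protobuf's 2 GB limit.
   mx.onnx.export_model(sym, params,
                        in_shapes=[(1, 3, 224, 224)],
                        in_types=np.float32,
                        onnx_file_path='./resnet.onnx',
                        large_model=True)
   ```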
   
   ## Checklist ##
   ### Essentials ###
   - [ ] PR's title starts with a category (e.g. [BUGFIX], [MODEL], [TUTORIAL], [FEATURE], [DOC], etc)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] Code is well-documented
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] MoisesHer commented on a change in pull request #20283: [v1.x] ONNX export for large model

Posted by GitBox <gi...@apache.org>.
MoisesHer commented on a change in pull request #20283:
URL: https://github.com/apache/incubator-mxnet/pull/20283#discussion_r637181015



##########
File path: python/mxnet/onnx/README.md
##########
@@ -63,6 +63,9 @@ Parameters:
         This is the old name of in_types. We keep this parameter name for backward compatibility
     input_shape : List of tuple
         This is the old name of in_shapes. We keep this parameter name for backward compatibility
+    large_model : Boolean
+        Whether to export a model that is larger than 2 GB. If true will save param tensors in seperate

Review comment:
       typo: seperate -> separate

##########
File path: python/mxnet/onnx/mx2onnx/_export_model.py
##########
@@ -83,6 +83,9 @@ def export_model(sym, params, in_shapes=None, in_types=np.float32,
         This is the old name of in_types. We keep this parameter name for backward compatibility
     input_shape : List of tuple
         This is the old name of in_shapes. We keep this parameter name for backward compatibility
+    large_model : Boolean
+        Whether to export a model that is larger than 2 GB. If true will save param tensors in seperate

Review comment:
       typo:  seperate -> separate

##########
File path: python/mxnet/onnx/README.md
##########
@@ -75,6 +78,9 @@ When the model has multiple inputs, all the input shapes and dtypes must be prov
 #### Dynamic Shape Input
 We can set `dynamic=True` to turn on support for dynamic input shapes. Note that even with dynamic shapes, a set of static input shapes still need to be specified in `in_shapes`; on top of that, we'll also need to specify which dimensions of the input shapes are dynamic in `dynamic_input_shapes`. We can simply set the dynamic dimensions as `None`, e.g. `(1, 3, None, None)`, or use strings in place of the `None`'s for better understandability in the exported onnx graph, e.g. `(1, 3, 'Height', 'Width')`
 
+#### Export Large Model
+Uses can set `large_model=True` to exoprt models that are larger than 2GB. In this case, all parameter tensors will be saved into seperate files along with the .onnx model file.

Review comment:
       typo: exoprt -> export, seperate -> separate
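
       For reference, a minimal sketch of the dynamic-shape export described in the quoted README section, assuming the `mx.onnx.export_model` entry point and the parameters shown in this PR; the model file names and shapes are illustrative assumptions:
   ```python
   import numpy as np
   import mxnet as mx

   # Hypothetical MXNet model files; substitute any exported symbol/params pair.
   sym = './resnet-symbol.json'
   params = './resnet-0000.params'

   # As the quoted README explains, a static shape is still required in in_shapes,
   # while dynamic_input_shapes marks which dimensions may vary (None or a
   # descriptive string both work for the dynamic dimensions).
   mx.onnx.export_model(sym, params,
                        in_shapes=[(1, 3, 224, 224)],
                        in_types=np.float32,
                        onnx_file_path='./resnet.onnx',
                        dynamic=True,
                        dynamic_input_shapes=[(1, 3, 'Height', 'Width')])
   ```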







[GitHub] [incubator-mxnet] waytrue17 commented on a change in pull request #20283: [v1.x] ONNX export for large model

Posted by GitBox <gi...@apache.org>.
waytrue17 commented on a change in pull request #20283:
URL: https://github.com/apache/incubator-mxnet/pull/20283#discussion_r637220064



##########
File path: python/mxnet/onnx/README.md
##########
@@ -63,6 +63,9 @@ Parameters:
         This is the old name of in_types. We keep this parameter name for backward compatibility
     input_shape : List of tuple
         This is the old name of in_shapes. We keep this parameter name for backward compatibility
+    large_model : Boolean
+        Whether to export a model that is larger than 2 GB. If true will save param tensors in seperate

Review comment:
       Fixed

##########
File path: python/mxnet/onnx/README.md
##########
@@ -75,6 +78,9 @@ When the model has multiple inputs, all the input shapes and dtypes must be prov
 #### Dynamic Shape Input
 We can set `dynamic=True` to turn on support for dynamic input shapes. Note that even with dynamic shapes, a set of static input shapes still need to be specified in `in_shapes`; on top of that, we'll also need to specify which dimensions of the input shapes are dynamic in `dynamic_input_shapes`. We can simply set the dynamic dimensions as `None`, e.g. `(1, 3, None, None)`, or use strings in place of the `None`'s for better understandability in the exported onnx graph, e.g. `(1, 3, 'Height', 'Width')`
 
+#### Export Large Model
+Uses can set `large_model=True` to exoprt models that are larger than 2GB. In this case, all parameter tensors will be saved into seperate files along with the .onnx model file.

Review comment:
       Fixed

##########
File path: python/mxnet/onnx/mx2onnx/_export_model.py
##########
@@ -83,6 +83,9 @@ def export_model(sym, params, in_shapes=None, in_types=np.float32,
         This is the old name of in_types. We keep this parameter name for backward compatibility
     input_shape : List of tuple
         This is the old name of in_shapes. We keep this parameter name for backward compatibility
+    large_model : Boolean
+        Whether to export a model that is larger than 2 GB. If true will save param tensors in seperate

Review comment:
       Fixed







[GitHub] [incubator-mxnet] waytrue17 commented on pull request #20283: [v1.x] ONNX export for large model

Posted by GitBox <gi...@apache.org>.
waytrue17 commented on pull request #20283:
URL: https://github.com/apache/incubator-mxnet/pull/20283#issuecomment-846289440


   > Apart from the typos, looks good to me
   
   Thanks for catching the typos! 





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20283: [v1.x] ONNX export for large model

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20283:
URL: https://github.com/apache/incubator-mxnet/pull/20283#issuecomment-844649392


   Hey @waytrue17, thanks for submitting the PR.
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands:
   - To trigger all jobs: @mxnet-bot run ci [all]
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2]
   ***
   **CI supported jobs**: [centos-gpu, unix-gpu, windows-cpu, centos-cpu, website, sanity, edge, miscellaneous, windows-gpu, unix-cpu, clang]
   ***
   _Note_:
   Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
   All CI tests must pass before the PR can be merged.
   





[GitHub] [incubator-mxnet] MoisesHer merged pull request #20283: [v1.x] ONNX export for large model

Posted by GitBox <gi...@apache.org>.
MoisesHer merged pull request #20283:
URL: https://github.com/apache/incubator-mxnet/pull/20283


   

