Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2021/04/14 23:29:17 UTC

[GitHub] [incubator-mxnet] waytrue17 opened a new pull request #20165: [v1.x] ONNX fix operator batch1

waytrue17 opened a new pull request #20165:
URL: https://github.com/apache/incubator-mxnet/pull/20165


   ## Description ##
   Rewrite `clip` to support more input data types (a short sketch of the dtype-handling idea follows below)
   Rewrite `_rminus_scalar` and `_rdiv_scalar` to fix an assertion error
   Rewrite `argmax` and `argmin` to fix an issue when `axis=None`
   Add unit tests for 9 operators
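   
   For context: ONNX `Clip` (opset 11 and later) takes its `min`/`max` bounds as tensor inputs that must have the same dtype as the data, so supporting more input data types largely comes down to casting the scalar attributes before emitting them. The sketch below only illustrates that idea; the function name, argument list, and return convention are made up and are not the mx2onnx converter API changed in this PR.
   
   ```python
   # Hypothetical sketch (not the actual mx2onnx converter code): emit an ONNX Clip
   # node whose min/max initializers are cast to the same dtype as the input tensor.
   import numpy as np
   from onnx import helper, numpy_helper
   
   def convert_clip_sketch(name, data_input, a_min, a_max, np_dtype):
       # Cast the scalar bounds to the input dtype so Clip's min/max inputs
       # type-match the data, as required by Clip in opset >= 11.
       min_init = numpy_helper.from_array(np.array(a_min, dtype=np_dtype), name + '_min')
       max_init = numpy_helper.from_array(np.array(a_max, dtype=np_dtype), name + '_max')
       node = helper.make_node('Clip',
                               inputs=[data_input, min_init.name, max_init.name],
                               outputs=[name], name=name)
       return [node], [min_init, max_init]
   ```
   
   ONNX `ArgMax`/`ArgMin` always require a concrete `axis`, so the `axis=None` case presumably needs the input flattened first (for example a `Reshape` to 1-D before the arg op); the exact handling is in the converter changes of this PR.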
   
   ## Checklist ##
   ### Essentials ###
   - [ ] The PR's title starts with a category (e.g. [BUGFIX], [MODEL], [TUTORIAL], [FEATURE], [DOC], etc.)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage
   - [ ] Code is well-documented
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is backward incompatible, explain why it must be made.
   - Note any interesting edge cases here.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] szha merged pull request #20165: [v1.x] ONNX legacy operator fix and test

Posted by GitBox <gi...@apache.org>.
szha merged pull request #20165:
URL: https://github.com/apache/incubator-mxnet/pull/20165


   





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #20165: [v1.x] ONNX fix operator batch1

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #20165:
URL: https://github.com/apache/incubator-mxnet/pull/20165#issuecomment-819914305


   Hey @waytrue17, thanks for submitting the PR.
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [miscellaneous, unix-cpu, windows-gpu, edge, centos-gpu, sanity, clang, unix-gpu, website, centos-cpu, windows-cpu]
   *** 
   _Note_: 
    Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin.
   All CI tests must pass before the PR can be merged. 
   





[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #20165: [v1.x] ONNX fix operator batch1

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on a change in pull request #20165:
URL: https://github.com/apache/incubator-mxnet/pull/20165#discussion_r614373330



##########
File path: tests/python-pytest/onnx/test_operators.py
##########
@@ -1275,3 +1287,42 @@ def test_onnx_export_contrib_div_sqrt_dim(tmp_path, dtype, shape):
     A = mx.nd.random.uniform(-100, 100, shape).astype(dtype)
     M = def_model('contrib.div_sqrt_dim')
     op_export_test('contrib_div_sqrt_dim', M, [A], tmp_path)
+
+
+# onnxruntime currently does not support int32
+@pytest.mark.parametrize('dtype', ['float16', 'float32', 'int64'])
+@pytest.mark.parametrize('shape', [(1,), (2, 3), (4, 5, 6)])
+def test_onnx_export_clip(tmp_path, dtype, shape):
+    A = mx.nd.random.uniform(-100, 100, shape).astype(dtype)
+    a_min = mx.nd.min(A).astype('float32').asnumpy()[0] + 5
+    a_max = mx.nd.max(A).astype('float32').asnumpy()[0] - 5
+    print(a_min)
+    M = def_model('clip', a_min=a_min, a_max=a_max)
+    op_export_test('clip', M, [A], tmp_path)
+
+
+@pytest.mark.parametrize('dtype', ['float16', 'float32', 'int32', 'int64'])
+@pytest.mark.parametrize('shape', [(3, 4, 5), (6, 7), (8,)])
+@pytest.mark.parametrize('func', [lambda x : x + np.random.rand(1)[0]*100,
+                                  lambda x : x * np.random.rand(1)[0]*100,

Review comment:
       Nice!



