Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/08/15 06:26:53 UTC

[GitHub] [incubator-mxnet] Zha0q1 opened a new pull request #18932: [WIP] Numpy Ops Large Tensor Tests

Zha0q1 opened a new pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932


   This PR adds large tensor tests for numpy operators.
   
   This is meant for progress sharing and not for merging.
   
   To test, run `nosetests --logging-level=DEBUG --verbose -s tests/nightly/test_np_large_array.py:{test_name}`
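
   A hedged pytest equivalent (flags assumed from stock pytest, not checked against this repo's pytest.ini; reviewers below ask for pytest output):

       pytest -s --log-cli-level=DEBUG tests/nightly/test_np_large_array.py::{test_name}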


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] access2rohit edited a comment on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit edited a comment on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-676703389


   @mxnet-label-bot add [pr-awaiting-merge]





[GitHub] [incubator-mxnet] Zha0q1 edited a comment on pull request #18932: [WIP] Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
Zha0q1 edited a comment on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-675078535


   Let me add backward tests and also skip the ones that we know will fail (they will be added back after the fixes are merged). Aiming to merge tomorrow.
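
   A hedged sketch of the skip pattern (a standard pytest marker; the reason string below is the one used in this PR's diffs):

       @pytest.mark.skip(reason='Does not support large tensor; to be fixed')
       def test_add_broadcast():
           ...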





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r472437743



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_add():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        C = np.add(A, B)
+    print(C)
+    assert C.shape == (INT_OVERFLOW, 2)
+    C.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+# this will fail; broadcast needs to be fixed
+# TODO add backward test after forward is fixed
+@use_np
+@pytest.mark.skip(reason='Does not support large tensor; to be fixed')
+def test_add_broadcast():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 1))
+    C = np.add(A, B)
+    print(C)
+    assert C.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_all():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.all(A)
+    print(B)
+    assert B.asnumpy() == True
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_amin():
+    A = np.ones((INT_OVERFLOW, 2))
+    A[100][1] = -1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.amin(A)
+    print(B)
+    assert B.asnumpy() == -1.0
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_amax():
+    A = np.zeros((INT_OVERFLOW, 2))
+    A[100][1] = 1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.amax(A)
+    print(B)
+    assert B.asnumpy() == 1.0
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)

Review comment:
       Add a backward value check here as well.
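
       A minimal sketch of such a check for test_amax above, assuming its own setup (A is all zeros except A[100][1] = 1; illustrative, not the PR's code): the gradient of amax is 1 at the max element and 0 elsewhere, so after B.backward() the test could also assert

           assert A.grad[100][1] == 1  # the max position receives gradient 1
           assert A.grad[0][0] == 0    # every other position receives 0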







[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r472436893



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_add():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        C = np.add(A, B)
+    print(C)
+    assert C.shape == (INT_OVERFLOW, 2)

Review comment:
       same as above







[GitHub] [incubator-mxnet] Zha0q1 removed a comment on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
Zha0q1 removed a comment on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-675687761


   The purpose of the tests is to check for int indexing: if an op uses index_t for indexing, it will pass the test. I don't see how going from a small to a large tensor would break correctness, because indexing is not part of the algorithm; to the ops, 100 and 2^31 look the same, since both are much smaller than 2^63.
   
   Those ops already have very comprehensive correctness test coverage.
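
   A small plain-numpy sketch of the arithmetic behind this claim (illustrative only; `onp` is stock numpy, used here just for the integer limits):

       import numpy as onp

       INT_OVERFLOW = 2**31
       num_elements = onp.int64(INT_OVERFLOW) * 2       # elements in a (2**31, 2) tensor
       assert num_elements > onp.iinfo(onp.int32).max   # overflows a signed 32-bit index
       assert num_elements < onp.iinfo(onp.int64).max   # fits easily in index_t (int64)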





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r473318063



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -78,14 +79,669 @@ def test_softmax():
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
 
-#@pytest.mark.skip(reason="CI hasn't switch to ILP64 OpenBLAS yet")
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+    assert A[0][0] == 1
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+    assert A[0][0] == 0
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A[0][0] = -1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    assert B.shape == (INT_OVERFLOW, 2)
+    assert B[0][0] == 1
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == -1
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A[0][0] = -1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    assert B.shape == (INT_OVERFLOW, 2)
+    assert B[0][0] == 1
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == -1
+
+@use_np
+@pytest.mark.skip(reason='backward errors out on (2^30,2), gives wrong result \
+    on (2^31, 2)')
+def test_add():
+    INT_OVERFLOW = 2**30
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        C = np.add(A, B)
+    assert C.shape == (INT_OVERFLOW, 2)
+    assert C[0][0] == 2
+    C.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == 1
+
+# this will fail; broadcast needs to be fixed
+# TODO add backward test after forward is fixed
+@use_np
+@pytest.mark.skip(reason='Does not support large tensor; to be fixed')
+def test_add_broadcast():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 1))
+    C = np.add(A, B)
+    assert C.shape == (INT_OVERFLOW, 2)
+    assert C[0][0] == 2
+
+@use_np
+def test_all():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.all(A)
+    assert B.asnumpy() == True

Review comment:
       OK, I will be expecting those changes in the next PR then 👍







[GitHub] [incubator-mxnet] Zha0q1 commented on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-675687761


   The purpose of the tests is to check for int indexing: if an op uses index_t for indexing, it will pass the test. I don't see how going from a small to a large tensor would break correctness, because indexing is not part of the algorithm. A value check might make more sense for linalg operators, because some linalg ops depend on the indexing type to calculate pivots; but sin, cos, and min just walk through each value in the tensor.
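
   A hedged sketch of such a spot value check, in the style of the tests in this PR (np is mxnet.numpy as in the test file; sin is chosen only as an example of an elementwise op):

       A = np.zeros((INT_OVERFLOW, 2))
       B = np.sin(A)
       assert B.shape == (INT_OVERFLOW, 2)
       assert B[-1][-1] == 0  # sin(0) == 0, checked at the far end of the tensor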





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r473313174



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -78,14 +79,669 @@ def test_softmax():
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
 
-#@pytest.mark.skip(reason="CI hasn't switch to ILP64 OpenBLAS yet")
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+    assert A[0][0] == 1
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+    assert A[0][0] == 0
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A[0][0] = -1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    assert B.shape == (INT_OVERFLOW, 2)
+    assert B[0][0] == 1
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == -1
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A[0][0] = -1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    assert B.shape == (INT_OVERFLOW, 2)
+    assert B[0][0] == 1
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == -1
+
+@use_np
+@pytest.mark.skip(reason='backward errors out on (2^30,2), gives wrong result \
+    on (2^31, 2)')
+def test_add():
+    INT_OVERFLOW = 2**30
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        C = np.add(A, B)
+    assert C.shape == (INT_OVERFLOW, 2)
+    assert C[0][0] == 2
+    C.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == 1
+
+# this will fail; broadcast needs to be fixed
+# TODO add backward test after forward is fixed
+@use_np
+@pytest.mark.skip(reason='Does not support large tensor; to be fixed')
+def test_add_broadcast():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 1))
+    C = np.add(A, B)
+    assert C.shape == (INT_OVERFLOW, 2)
+    assert C[0][0] == 2
+
+@use_np
+def test_all():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.all(A)
+    assert B.asnumpy() == True

Review comment:
       Question: even though these are numpy ops, is the output not numpy-compatible?







[GitHub] [incubator-mxnet] access2rohit commented on pull request #18932: [WIP] Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-675049225


   Use @mxnet-label-bot to mark your PR as WIP/ready for review/ready to merge
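
   For example, as used later in this thread:

       @mxnet-label-bot add [pr-awaiting-merge]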





[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r473316123



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -78,14 +79,669 @@ def test_softmax():
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
 
-#@pytest.mark.skip(reason="CI hasn't switch to ILP64 OpenBLAS yet")
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+    assert A[0][0] == 1
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+    assert A[0][0] == 0
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A[0][0] = -1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    assert B.shape == (INT_OVERFLOW, 2)
+    assert B[0][0] == 1
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == -1
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A[0][0] = -1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    assert B.shape == (INT_OVERFLOW, 2)
+    assert B[0][0] == 1
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == -1
+
+@use_np
+@pytest.mark.skip(reason='backward errors out on (2^30,2), gives wrong result \
+    on (2^31, 2)')
+def test_add():
+    INT_OVERFLOW = 2**30
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        C = np.add(A, B)
+    assert C.shape == (INT_OVERFLOW, 2)
+    assert C[0][0] == 2
+    C.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+    assert A.grad[0][0] == 1
+
+# this will fail; broadcast needs to be fixed
+# TODO add backward test after forward is fixed
+@use_np
+@pytest.mark.skip(reason='Does not support large tensor; to be fixed')
+def test_add_broadcast():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 1))
+    C = np.add(A, B)
+    assert C.shape == (INT_OVERFLOW, 2)
+    assert C[0][0] == 2
+
+@use_np
+def test_all():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.all(A)
+    assert B.asnumpy() == True

Review comment:
       It is numpy compatible; this is probably left over from my previous version. I removed `asnumpy()` locally and will probably include this fix in the next PR.
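
       A hedged sketch of the described fix (assuming a size-1 mxnet.numpy boolean result compares directly):

           B = np.all(A)
           assert B == True  # no asnumpy() round-trip needed for a scalar result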







[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r472436666



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)

Review comment:
       Add value checks for both forward and backward.







[GitHub] [incubator-mxnet] access2rohit removed a comment on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit removed a comment on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-676703034


   @mxnet-label-bot add [pr-ready-to-merge]





[GitHub] [incubator-mxnet] Zha0q1 commented on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-676698833


   Overnight run:
   
   ================================= test session starts ==================================
   platform linux -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
   rootdir: /home/ubuntu/incubator-mxnet, inifile: pytest.ini
   plugins: remotedata-0.3.2, openfiles-0.4.0, astropy-header-0.1.2, hypothesis-5.8.3, arraydiff-0.3, doctestplus-0.5.0
   collected 52 items                                                                     
   
   tests/nightly/test_np_large_array.py .....s..........ss...s..........s...sss.... [ 82%]
   ...s.....                                                                        [100%]
   
   =================================== warnings summary ===================================
   tests/nightly/test_np_large_array.py:89
     /home/ubuntu/incubator-mxnet/tests/nightly/test_np_large_array.py:89: DeprecationWarning: invalid escape sequence \ 
       '''
   
   tests/nightly/test_np_large_array.py:459
     /home/ubuntu/incubator-mxnet/tests/nightly/test_np_large_array.py:459: DeprecationWarning: invalid escape sequence \ 
       '''
   
   -- Docs: https://docs.pytest.org/en/latest/warnings.html
   ====================== 43 passed, 9 skipped, 2 warnings in 1.10s =======================
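
   The two warnings come from the backslashes in the ASCII-art banner; a hedged sketch of the likely fix (making the banner a raw string so `\ ` is no longer parsed as an escape sequence):

       r'''
         _ _ _  _ _ __  _ __ _  _
        | ' \ || | '  \| '_ \ || |
        |_||_\_,_|_|_|_| .__/\_, |
                       |_|   |__/
       '''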





[GitHub] [incubator-mxnet] access2rohit edited a comment on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit edited a comment on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-676703034


   @mxnet-label-bot add [pr-ready-to-merge]





[GitHub] [incubator-mxnet] Zha0q1 edited a comment on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
Zha0q1 edited a comment on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-675687761


   The purpose of the tests is to check for int indexing: if an op uses index_t for indexing, it will pass the test. I don't see how going from a small to a large tensor would break correctness, because indexing is not part of the algorithm; to the ops, 100 and 2^31 look the same, since both are much smaller than 2^63.





[GitHub] [incubator-mxnet] access2rohit commented on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-676703034


   @mxnet-label-bot update [pr-ready-to-merge]





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r472437271



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_add():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        C = np.add(A, B)
+    print(C)
+    assert C.shape == (INT_OVERFLOW, 2)
+    C.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+# this will fail; broadcast needs to be fixed
+# TODO add backward test after forward is fixed
+@use_np
+@pytest.mark.skip(reason='Does not support large tensor; to be fixed')
+def test_add_broadcast():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 1))
+    C = np.add(A, B)
+    print(C)

Review comment:
       Don't print; add assert checks for a single value instead.







[GitHub] [incubator-mxnet] sandeep-krishnamurthy merged pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
sandeep-krishnamurthy merged pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932


   





[GitHub] [incubator-mxnet] access2rohit commented on pull request #18932: [WIP] Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-675047970


   @Zha0q1, paste the output from a pytest run rather than nosetests. Get these ops merged; you can add new ops in a different PR.





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r472437632



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_add():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        C = np.add(A, B)
+    print(C)
+    assert C.shape == (INT_OVERFLOW, 2)
+    C.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+# this will fail; broadcast needs to be fixed
+# TODO add backward test after forward is fixed
+@use_np
+@pytest.mark.skip(reason='Does not support large tensor; to be fixed')
+def test_add_broadcast():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 1))
+    C = np.add(A, B)
+    print(C)
+    assert C.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_all():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.all(A)
+    print(B)
+    assert B.asnumpy() == True
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)

Review comment:
       Add a backward value check.

##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_add():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        C = np.add(A, B)
+    print(C)
+    assert C.shape == (INT_OVERFLOW, 2)
+    C.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+# this will fail; broadcast needs to be fixed
+# TODO add backward test after forward is fixed
+@use_np
+@pytest.mark.skip(reason='Does not support large tensor; to be fixed')
+def test_add_broadcast():
+    A = np.ones((INT_OVERFLOW, 2))
+    B = np.ones((INT_OVERFLOW, 1))
+    C = np.add(A, B)
+    print(C)
+    assert C.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_all():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.all(A)
+    print(B)
+    assert B.asnumpy() == True
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_amin():
+    A = np.ones((INT_OVERFLOW, 2))
+    A[100][1] = -1
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.amin(A)
+    print(B)
+    assert B.asnumpy() == -1.0
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)

Review comment:
       Add a backward value check.







[GitHub] [incubator-mxnet] access2rohit commented on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-676703389


   @mxnet-label-bot add [pr-ready-to-merge]





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r472436354



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)

Review comment:
       Can you also add checks for a single value at [0] or [-1]?
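
       A hedged sketch of what that could look like for test_zeros above (illustrative only):

           A = np.zeros((INT_OVERFLOW, 2))
           assert A[0][0] == 0 and A[-1][-1] == 0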

##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)

Review comment:
       Value checks here too.







[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#discussion_r472436783



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -76,3 +78,459 @@ def test_softmax():
         true_output = np.full((SMALL_Y, LARGE_X), (1 / input_data.shape[axis]))
         output = npx.softmax(input_data, axis=axis)
         assert_almost_equal(output.asnumpy(), true_output, rtol=1e-5, atol=1e-5)
+
+'''
+  _ _ _  _ _ __  _ __ _  _
+ | ' \ || | '  \| '_ \ || |
+ |_||_\_,_|_|_|_| .__/\_, |
+                |_|   |__/
+'''
+
+@use_np
+def test_ones():
+    A = np.ones((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_zeros():
+    A = np.zeros((INT_OVERFLOW, 2))
+    assert A.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_abs():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.abs(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)
+    B.backward()
+    assert A.grad.shape == (INT_OVERFLOW, 2)
+
+@use_np
+def test_absolute():
+    A = np.ones((INT_OVERFLOW, 2))
+    A.attach_grad()
+    with mx.autograd.record():
+        B = np.absolute(A)
+    print(B)
+    assert B.shape == (INT_OVERFLOW, 2)

Review comment:
       same as above
   







[GitHub] [incubator-mxnet] Zha0q1 commented on pull request #18932: [WIP] Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-675078535


   Let me add backward tests and also skip the ones that we know will fail. Aiming to merge tomorrow.





[GitHub] [incubator-mxnet] Zha0q1 edited a comment on pull request #18932: Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
Zha0q1 edited a comment on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-675687761


   The purpose of the tests is to check for int indexing: if an op uses index_t for indexing, it will pass the test. I don't see how going from a small to a large tensor would break correctness, because indexing is not part of the algorithm; to the ops, 100 and 2^31 look the same, since both are much smaller than 2^63.
   
   Those ops already have very comprehensive correctness test coverage.





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #18932: [WIP] Numpy Ops Large Tensor Tests

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #18932:
URL: https://github.com/apache/incubator-mxnet/pull/18932#issuecomment-674356961


   Hey @Zha0q1 , Thanks for submitting the PR 
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [clang, miscellaneous, unix-gpu, website, windows-cpu, sanity, centos-gpu, edge, unix-cpu, windows-gpu, centos-cpu]
   *** 
   _Note_: 
    Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin. 
   All CI tests must pass before the PR can be merged. 
   

