Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/08/31 18:59:37 UTC

[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #19033: Numpy RNN operator large dim checks

sxjscience commented on a change in pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#discussion_r480326711



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -875,34 +875,65 @@ def test_digamma():
                 rtol=1e-3, atol=1e-5)
 
 @use_np
-@pytest.mark.skip(reason='broken on large tensors; also backward errors out')
-def test_rnn():
+def test_rnn_dim_check():
+    L_SEQ, BAT, L_INP, L_STA = 2**31, 4, 2**10, 2
+    data = np.random.uniform(-1, 1, (L_SEQ, BAT, L_INP))

Review comment:
       Consider revising it as I suggested: generate 20 random numbers from uniform(-1, 1), store them in a small array cached in the Python script, and then cycle through these numbers to construct the large tensor.
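
A minimal sketch of the suggested approach, using plain NumPy (the test itself uses MXNet's `np` namespace). The helper name `cyclic_uniform` and its parameters are illustrative, not part of the PR:

```python
import numpy as np

def cyclic_uniform(shape, n_cached=20, low=-1.0, high=1.0, seed=0):
    """Fill a large tensor by cycling a small cached pool of uniform
    random values, instead of sampling every element independently."""
    rng = np.random.default_rng(seed)
    pool = rng.uniform(low, high, n_cached)   # small cached array
    total = int(np.prod(shape))
    reps = -(-total // n_cached)              # ceil(total / n_cached)
    # Tile the pool cyclically to cover the tensor, trim, and reshape.
    return np.tile(pool, reps)[:total].reshape(shape)

# In the test, the shape would be (L_SEQ, BAT, L_INP) with L_SEQ = 2**31;
# a small shape is used here for illustration only.
data = cyclic_uniform((6, 7))
```

Since only `n_cached` distinct values are ever generated, the cost of building the tensor is dominated by `np.tile` rather than by per-element random sampling, which matters at the 2**31-element scale the test targets.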




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org