Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/07/29 08:01:41 UTC

[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #18807: adding error message when attempting to use Large tensor with linalg_syevd

Zha0q1 commented on a change in pull request #18807:
URL: https://github.com/apache/incubator-mxnet/pull/18807#discussion_r461845972



##########
File path: 3rdparty/mshadow/mshadow/base.h
##########
@@ -336,6 +336,8 @@ const float kPi = 3.1415926f;
   typedef int32_t index_t;
 #endif
 
+const index_t kInt32Limit = (int64_t{1} << 31) - 1;

Review comment:
       What's the advantage of manually defining this over using INT_MAX from <climits>?

##########
File path: src/operator/tensor/la_op.h
##########
@@ -470,11 +470,14 @@ inline bool DetType(const nnvm::NodeAttrs& attrs,
 inline bool LaEigFactShape(const nnvm::NodeAttrs& attrs,
                            mxnet::ShapeVector* in_attrs,
                            mxnet::ShapeVector* out_attrs) {
+  using namespace mshadow;
   CHECK_EQ(in_attrs->size(), 1);
   CHECK_EQ(out_attrs->size(), 2);
   const mxnet::TShape& in_a = (*in_attrs)[0];
   const mxnet::TShape& out_u = (*out_attrs)[0];
   const mxnet::TShape& out_l = (*out_attrs)[1];
+  CHECK_LE(in_a.Size(), INT_MAX)

Review comment:
       Did you include <climits>?

##########
File path: src/operator/tensor/la_op.h
##########
@@ -470,11 +470,14 @@ inline bool DetType(const nnvm::NodeAttrs& attrs,
 inline bool LaEigFactShape(const nnvm::NodeAttrs& attrs,
                            mxnet::ShapeVector* in_attrs,
                            mxnet::ShapeVector* out_attrs) {
+  using namespace mshadow;

Review comment:
       This can be removed then.

##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,16 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_errors():
+    def check_syevd_error():
+        A = get_identity_mat(LARGE_SQ_X)
+        for i in range(LARGE_SQ_X):
+            A[i,i] = 1
+        assertRaises(MXNetError, mx.nd.linalg.syevd, A)

Review comment:
       I think the error is raised when the operator is executed rather than when it's pushed, so this assertion will not catch it.
   
    ```
    def test_linalg_large_dim():
        def check_gemm():
            A = mx.nd.ones(shape=(1, 2**31))
            B = mx.nd.ones(shape=(1, 2**31))
            C = mx.nd.ones(shape=(1, 1))
            D = mx.nd.linalg.gemm(A, B, C, transpose_b=True, alpha=2.0, beta=10.0)
            # block until the async engine has executed the op,
            # so the deferred error is raised inside this function
            mx.nd.waitall()
        assertRaises(MXNetError, check_gemm)
    ```
   
    This worked for me. We need to wrap the call in an additional layer and call mx.nd.waitall() inside that layer, so the deferred error propagates where assertRaises can catch it.
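    
    As a rough sketch, the syevd check above could follow the same pattern (assuming `get_identity_mat`, `LARGE_SQ_X`, `assertRaises`, and `MXNetError` are available from the existing test helpers):
    
    ```
    def test_linalg_errors():
        def run_syevd():
            # square matrix large enough that its element count
            # exceeds the int32 limit the PR is guarding against
            A = get_identity_mat(LARGE_SQ_X)
            mx.nd.linalg.syevd(A)
            # force execution here; without this the deferred MXNetError
            # never surfaces and assertRaises sees nothing
            mx.nd.waitall()
        assertRaises(MXNetError, run_syevd)
    ```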




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org