Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/07/29 07:39:23 UTC

[GitHub] [incubator-mxnet] Zha0q1 opened a new pull request #18816: [WIP] Add Large Dim Checks for linalg Operators

Zha0q1 opened a new pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816


   Add large-dimension checks for linalg operators. Although external BLAS libraries support large tensors (more than 2^32 elements), large dimensions (>= 2^31) trigger an OpenBLAS int overflow error under the current configuration. This PR adds checks that reject such inputs cleanly with an informative error instead of overflowing.
   
   Done:
   1. gemm and gemm2
   
   TODO:
   rest of linalg operators
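   
   For illustration only (this snippet is not part of the PR's diff): a minimal sketch of how the new guard is expected to surface to a frontend user, assuming the failed dimension check is raised as an MXNetError. The oversized input needs roughly 8 GB of memory and an int64 (large-tensor) build, so treat it as a sketch rather than something to run casually.
   
```python
import mxnet as mx
from mxnet import nd

INT32_MAX = 2**31 - 1  # dimensions above this overflow OpenBLAS's 32-bit indices

# One oversized row dimension (~8.6 GB of float32 data).
A = nd.ones(shape=(1, INT32_MAX + 1, 1))
B = nd.ones(shape=(1, 1, 1))

try:
    nd.linalg.gemm2(A, B).wait_to_read()
except mx.base.MXNetError as err:
    # Expected after this PR: "Large matrix dimensions (>= 2^31) are not supported"
    print("gemm2 rejected the oversized dimension:", err)
```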
   
   
   
   
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments are documented. 
   - For new examples, a README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward-incompatible change, explain why it must be made.
   - Interesting edge cases to note here
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #18816: [WIP] Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816#issuecomment-665421092


   Hey @Zha0q1, thanks for submitting the PR 
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [unix-gpu, centos-cpu, windows-cpu, unix-cpu, sanity, centos-gpu, website, edge, clang, windows-gpu, miscellaneous]
   *** 
   _Note_: 
    Only the following 3 categories can trigger CI: PR Author, MXNet Committer, Jenkins Admin. 
   All CI tests must pass before the PR can be merged. 
   



[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #18816: [WIP] Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on a change in pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816#discussion_r462028540



##########
File path: src/operator/tensor/la_op.h
##########
@@ -181,6 +189,21 @@ inline bool LaMatrixMultMacOpShape(const nnvm::NodeAttrs& attrs,
     const int ndim((*in_attrs)[0].ndim()), axis(axis_param < 0 ? ndim + axis_param : axis_param);
     CHECK(axis >= 0 && axis < ndim-1)
       << "Invalid row axis (" << axis_param << ")";
+    // check if any dim is too large
+    check_large_dim({(*in_attrs)[0][axis],
+		     (*in_attrs)[0][ndim-1],
+		     (*in_attrs)[1][axis],
+		     (*in_attrs)[1][ndim-1]});
+    /*
+    CHECK_LE((*in_attrs)[0][axis], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";
+    CHECK_LE((*in_attrs)[0][ndim-1], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;
+    CHECK_LE((*in_attrs)[1][axis], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;
+    CHECK_LE((*in_attrs)[1][ndim-1], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;

Review comment:
       This is only for the sake of discussion. I hope you agree that the above style (using a helper function) is better.





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18816: [WIP] Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816#discussion_r462427748



##########
File path: src/operator/tensor/la_op.h
##########
@@ -157,6 +157,14 @@ struct LaTrianParam : public dmlc::Parameter<LaTrianParam> {
   }
 };
 
+// check if any dim will overflow 32-bit int

Review comment:
       Neat!

##########
File path: src/operator/tensor/la_op.h
##########
@@ -181,6 +189,21 @@ inline bool LaMatrixMultMacOpShape(const nnvm::NodeAttrs& attrs,
     const int ndim((*in_attrs)[0].ndim()), axis(axis_param < 0 ? ndim + axis_param : axis_param);
     CHECK(axis >= 0 && axis < ndim-1)
       << "Invalid row axis (" << axis_param << ")";
+    // check if any dim is too large

Review comment:
       Please remove comments before merging. Rest LGTM!

##########
File path: src/operator/tensor/la_op.h
##########
@@ -181,6 +189,21 @@ inline bool LaMatrixMultMacOpShape(const nnvm::NodeAttrs& attrs,
     const int ndim((*in_attrs)[0].ndim()), axis(axis_param < 0 ? ndim + axis_param : axis_param);
     CHECK(axis >= 0 && axis < ndim-1)
       << "Invalid row axis (" << axis_param << ")";
+    // check if any dim is too large
+    check_large_dim({(*in_attrs)[0][axis],
+		     (*in_attrs)[0][ndim-1],
+		     (*in_attrs)[1][axis],
+		     (*in_attrs)[1][ndim-1]});
+    /*
+    CHECK_LE((*in_attrs)[0][axis], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";
+    CHECK_LE((*in_attrs)[0][ndim-1], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;
+    CHECK_LE((*in_attrs)[1][axis], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;
+    CHECK_LE((*in_attrs)[1][ndim-1], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;

Review comment:
       Please remove comments before merging. Rest LGTM!





[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #18816: Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
ChaiBapchya commented on a change in pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816#discussion_r462660405



##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       this should go to test_large_vector since the input has one dimension that is large while the rest of the dimensions are small. @access2rohit please confirm

##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       The name of the file could be better.
   Basically, the idea was to have 2 separate files:
   
   1. test_large_array.py [more like test_large_**size**.py]
   tests input whose individual dimensions are all less than `2**32` but whose total size is > `2**32`
   
   2. test_large_vector.py [more like test_large_**shape**.py]
   tests input where at least one individual dimension is > `2**32`

##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       From a consistency standpoint, I'd:
   1. put these tests in the test_large_vector.py file,
   2. rename that file to [whatever sounds right; I just gave a suggestion above], and
   3. add a comment in that file.
   
   All dimensions in test_large_array.py are less than INT32_MAX.
   Large dimensions [> `2**32`] were introduced in test_large_vector.py for that same reason.
   
   So to keep the testing approach consistent, I'd do that.
   
   Even though both files exercise "large tensors", one does it for large "size" and the other specifically for large "shape".
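   
   To make the split concrete, an illustrative sketch of the two kinds of shapes (the shapes below are hypothetical examples, not the ones used in the actual test files; the gemm tests in this PR use INT32_MAX + 1, which already exceeds the signed 32-bit range):
   
```python
# "Large size" (test_large_array.py style): every dimension is below 2**32,
# but the total number of elements exceeds 2**32.
large_size_shape = (2**16, 2**16 + 1)   # ~4.3e9 elements in total

# "Large shape" (test_large_vector.py style): at least one single dimension
# exceeds 2**32.
large_shape_shape = (1, 2**32 + 1)      # one oversized axis

# Either tuple would be passed to something like nd.ones(shape=...), which
# needs tens of GB of memory, hence shapes only in this sketch.
```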

##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       Yes, I know they aren't "vectors", and hence I recommended "renaming the file".

##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       If we keep these tests in this file, it defeats the purpose of a few tests in test_large_vector.py
   
   https://github.com/apache/incubator-mxnet/blob/6bbd53107aa16fc41e8d462cf5dc46fb70d592df/tests/nightly/test_large_vector.py#L99-L126





[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #18816: [WIP] Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on a change in pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816#discussion_r462602126



##########
File path: src/operator/tensor/la_op.h
##########
@@ -181,6 +189,21 @@ inline bool LaMatrixMultMacOpShape(const nnvm::NodeAttrs& attrs,
     const int ndim((*in_attrs)[0].ndim()), axis(axis_param < 0 ? ndim + axis_param : axis_param);
     CHECK(axis >= 0 && axis < ndim-1)
       << "Invalid row axis (" << axis_param << ")";
+    // check if any dim is too large
+    check_large_dim({(*in_attrs)[0][axis],
+		     (*in_attrs)[0][ndim-1],
+		     (*in_attrs)[1][axis],
+		     (*in_attrs)[1][ndim-1]});
+    /*
+    CHECK_LE((*in_attrs)[0][axis], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";
+    CHECK_LE((*in_attrs)[0][ndim-1], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;
+    CHECK_LE((*in_attrs)[1][axis], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;
+    CHECK_LE((*in_attrs)[1][ndim-1], INT_MAX)
+      << "Large matrix dimensions (>= 2^31) are not supported";;

Review comment:
       yep

##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       Maybe we should add more explicit comments on what those tests do? I can do so in my next commit. I still think the two cases both fall into the same category, which is testing with tensors of large dimensions.

##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       In the comments I would say something like: 1. those tests are for overflowing the total size; 2. these other tests are for overflowing the index calculation, i.e. the row dim, col dim, etc.

##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       Well, one thing to note is that they are not vectors per se. The inputs are all 3D, whereas in test_large_vector they are all 1D. The dim checks happen on both the row and col dims, so you can see I used both (1, 1, x) and (1, x, 1).
   
   I will add more comments in the next commit.
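   
   For illustration, a sketch of how both orientations might be exercised (the real test body is truncated in the diff above; the helper name below is made up, and it assumes the new check surfaces as an MXNetError and that the ~8 GB inputs fit in memory):
   
```python
import mxnet as mx
from mxnet import nd

INT32_MAX = 2**31 - 1

def expect_large_dim_error(A, B):
    """Run gemm2 and require that the large-dimension check rejects it."""
    try:
        nd.linalg.gemm2(A, B).wait_to_read()
    except mx.base.MXNetError:
        return  # the oversized dimension was rejected, as expected
    raise AssertionError("expected a large-dimension error")

# (1, x, 1): oversized row dimension on the left operand
expect_large_dim_error(nd.ones(shape=(1, INT32_MAX + 1, 1)), nd.ones(shape=(1, 1, 1)))
# (1, 1, x): oversized column dimension on the right operand
expect_large_dim_error(nd.ones(shape=(1, 1, 1)), nd.ones(shape=(1, 1, INT32_MAX + 1)))
```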





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #18816: Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816#discussion_r462769587



##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       @Zha0q1 the vector file generally houses tests for operators where a single dimension exceeds the 2^32 range. Please address what @ChaiBapchya suggested and move these to the vector tests. Feel free to rename the file to `test_large_dimensions.py`.




[GitHub] [incubator-mxnet] Zha0q1 commented on pull request #18816: Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816#issuecomment-666008268


   > Functionality-wise looks good to me. Had other thoughts about "where" this test should be placed. Feel free to disagree & merge.
   > Looks good other than that. Thanks!
   
   Thanks!



[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #18816: Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on a change in pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816#discussion_r463180088



##########
File path: tests/nightly/test_large_array.py
##########
@@ -1350,6 +1351,50 @@ def run_trsm(inp):
     check_batch_trsm()
 
 
+def test_linalg_large_dim():
+    def check_gemm():
+        A = nd.ones(shape=(1, INT32_MAX + 1, 1))

Review comment:
       Yeah, that would make sense. I have moved the tests to test_large_vector.py, keeping the original file name to avoid a naming discrepancy with master. I also added comments to the tests.





[GitHub] [incubator-mxnet] sandeep-krishnamurthy merged pull request #18816: Add Large Dim Checks for linalg Operators

Posted by GitBox <gi...@apache.org>.
sandeep-krishnamurthy merged pull request #18816:
URL: https://github.com/apache/incubator-mxnet/pull/18816


   

