Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/08/28 23:11:52 UTC

[GitHub] [incubator-mxnet] Zha0q1 opened a new pull request #19033: Numpy RNN operator large dim checks

Zha0q1 opened a new pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033


   This PR adds large dimension checks for `npx.rnn`.
   
   Each dimension cannot be >= 2^31, but it is fine for the total data size to exceed that limit. I have updated the test cases as well to reflect a somewhat plausible large tensor use case.
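   A minimal Python sketch of the rule being enforced (the helper name and error text here are illustrative, not the actual implementation):
   
   ```
   INT32_MAX = 2**31 - 1
   
   def check_rnn_dims(shape):
       # Mirrors the new shape check: every individual dimension must fit in
       # int32, while the *product* of the dimensions may still exceed 2**31.
       for axis, dim in enumerate(shape):
           if dim >= INT32_MAX:
               raise ValueError(f"RNN does not support large dimensions: "
                                f"axis {axis} has size {dim}")
   
   check_rnn_dims((2**30, 4, 2**10))  # OK even though the total size > 2**31
   check_rnn_dims((2**31, 4, 2**10))  # raises ValueError for axis 0
   ```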
   
   I have run the four tests locally on my machine; here is a sample result:
   
   ```
   ubuntu@ip-172-31-38-169:~/incubator-mxnet$ pytest tests/nightly/test_np_large_array.py::test_rnn_vanilla
   /home/ubuntu/anaconda3/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
     return f(*args, **kwds)
   /home/ubuntu/anaconda3/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
     return f(*args, **kwds)
   /home/ubuntu/anaconda3/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
     return f(*args, **kwds)
   ============================================ test session starts =============================================
   platform linux -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
   rootdir: /home/ubuntu/incubator-mxnet, inifile: pytest.ini
   plugins: remotedata-0.3.2, openfiles-0.4.0, astropy-header-0.1.2, hypothesis-5.8.3, arraydiff-0.3, doctestplus-0.5.0
   collected 1 item                                                                                             
   
   tests/nightly/test_np_large_array.py .                                [100%]
   
   ======================================= 1 passed in 336.44s (0:05:36) ======================================
   ```


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-mxnet] sxjscience commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
sxjscience commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683967309


   > > To start with, we may first check the gradient of smaller workloads by calculating the finite difference: #19045. We can pregenerate the states and inputs to avoid flaky tests.
   > 
   > Just a random thought: since we would be comparing numerical and analytical gradient results, I think there might be some requirements on the precision/scale of inputs to limit the error?
   
   Usually, the `eps` in a finite difference test is a tunable parameter; `1E-2` is usually a good value (in my experience). To avoid flaky tests, we need to ensure that the states + inputs are generated once and then fixed. One choice is to generate and cache 10 or 20 numbers, and build the large tensor by cyclically reusing them.
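   For illustration, a minimal numpy sketch of such a finite-difference check (`f` is a stand-in for the operator under test; the shape, seed, and tolerance are assumptions):
   
   ```
   import numpy as np
   
   eps = 1e-2                                  # tunable, as noted above
   x = np.random.RandomState(0).uniform(-1, 1, (3, 4))  # pregenerated, fixed
   
   def f(arr):
       return np.sum(np.tanh(arr))             # stand-in for the op under test
   
   num_grad = np.empty_like(x)
   for idx in np.ndindex(*x.shape):
       xp, xm = x.copy(), x.copy()
       xp[idx] += eps                           # perturb one element up...
       xm[idx] -= eps                           # ...and down
       num_grad[idx] = (f(xp) - f(xm)) / (2 * eps)  # central difference
   
   analytical = 1.0 - np.tanh(x) ** 2           # d/dx of sum(tanh(x))
   assert np.allclose(num_grad, analytical, rtol=1e-2)
   ```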





[GitHub] [incubator-mxnet] Zha0q1 commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683953886


   > Previously, the tests related to RNN are in https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_gluon_rnn.py and there is a gradient check. We may do it later, but I'm thinking that large shapes may fail the gradient check due to some numerical issues.
   
   I think the gradient check there only checks consistency between slightly different runs of the RNN; there is no check for absolute correctness. For large tensor RNN, what gradient checks would you suggest?





[GitHub] [incubator-mxnet] Zha0q1 commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683935077


   > Should we add some gradient check for the RNN + Large Tensor? (Although it may take some time).
   
   I think it might also be hard to calculate a reference to compare our gradient with. Even the existing code does not seem to test for that (or maybe I missed it).





[GitHub] [incubator-mxnet] Zha0q1 edited a comment on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
Zha0q1 edited a comment on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683964791


   > To start with, we may first check the gradient of smaller workloads by calculating the finite difference: #19045. We can pregenerate the states and inputs to avoid flaky tests.
   
   Just a random thought: since we would be comparing numerical and analytical gradient results, I think there might be some requirements on the precision/scale of the inputs to limit the error? We might want to try this approach out on some simple ops first.





[GitHub] [incubator-mxnet] sxjscience commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
sxjscience commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683921735


   Should we add some gradient check for the RNN + Large Tensor? (Although it may take some time).





[GitHub] [incubator-mxnet] access2rohit removed a comment on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
access2rohit removed a comment on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683938756


   Left 1 comment. Rest LGTM!





[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on a change in pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#discussion_r480333070



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -875,34 +875,65 @@ def test_digamma():
                 rtol=1e-3, atol=1e-5)
 
 @use_np
-@pytest.mark.skip(reason='broken on large tensors; also backward errors out')
-def test_rnn():
+def test_rnn_dim_check():
+    L_SEQ, BAT, L_INP, L_STA = 2**31, 4, 2**10, 2
+    data = np.random.uniform(-1, 1, (L_SEQ, BAT, L_INP))

Review comment:
    > You can consider revising it as I have suggested: generate 20 random numbers from uniform(-1, 1), store them in a small array cached in the Python script, and then cyclically use these numbers to construct the large tensor.
   
   Right, this would speed up testing, but we are still unable to test correctness without a ground truth. Outside the scope of this PR, I definitely agree that more correctness tests can be added; if we start with smaller tensors there, we can just use a fixed seed to begin with, and then assess the need to progress to large tensors.








[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
sxjscience commented on a change in pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#discussion_r480326711



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -875,34 +875,65 @@ def test_digamma():
                 rtol=1e-3, atol=1e-5)
 
 @use_np
-@pytest.mark.skip(reason='broken on large tensors; also backward errors out')
-def test_rnn():
+def test_rnn_dim_check():
+    L_SEQ, BAT, L_INP, L_STA = 2**31, 4, 2**10, 2
+    data = np.random.uniform(-1, 1, (L_SEQ, BAT, L_INP))

Review comment:
    You can consider revising it as I have suggested: generate 20 random numbers from uniform(-1, 1), store them in a small array cached in the Python script, and then cyclically use these numbers to construct the large tensor.
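   A minimal numpy sketch of that construction (the helper name is illustrative, and the shape is kept small here; the same pattern scales to large tensors):
   
   ```
   import numpy as np
   
   cache = np.random.RandomState(42).uniform(-1, 1, 20)  # generated once, kept in the script
   
   def cyclic_tensor(shape):
       # Repeat the 20 cached values cyclically until the target size is
       # reached, then reshape -- deterministic across runs, so no flakiness.
       n = int(np.prod(shape, dtype=np.int64))
       return np.resize(cache, n).reshape(shape)
   
   data = cyclic_tensor((2**10, 4, 2**5))  # illustrative shape, not 2**31
   ```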







[GitHub] [incubator-mxnet] access2rohit commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
access2rohit commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683938756


   Left 1 comment. Rest LGTM!





[GitHub] [incubator-mxnet] Zha0q1 commented on a change in pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on a change in pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#discussion_r480315341



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -875,34 +875,65 @@ def test_digamma():
                 rtol=1e-3, atol=1e-5)
 
 @use_np
-@pytest.mark.skip(reason='broken on large tensors; also backward errors out')
-def test_rnn():
+def test_rnn_dim_check():
+    L_SEQ, BAT, L_INP, L_STA = 2**31, 4, 2**10, 2
+    data = np.random.uniform(-1, 1, (L_SEQ, BAT, L_INP))

Review comment:
    If we initialize the data and params to 1, the output will explode very fast; on the contrary, if we initialize them to 0, the output will be all zeros. In practice neither is an actual use case. I think testing RNN is tricky in that it is hard to establish a numerically correct reference.
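   (To see the blow-up concretely, here is a toy linear recurrence; it drops the nonlinearity, which for an actual `rnn_tanh` cell would instead saturate the states toward +/-1. The all-zeros case is exact either way.)
   
   ```
   import numpy as np
   
   # h_t = W @ h_{t-1} with all-ones weights and state: every step multiplies
   # each element by the hidden size, so the values explode geometrically.
   n = 4
   W = np.ones((n, n))
   h = np.ones(n)
   for t in range(5):
       h = W @ h
       print(t, h[0])  # prints 4.0, 16.0, 64.0, 256.0, 1024.0
   
   # All-zeros initialization instead keeps every state (and output) at 0.
   ```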







[GitHub] [incubator-mxnet] sxjscience commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
sxjscience commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683958384


   To start with, we may first check the gradient of smaller workloads by calculating the finite difference: https://github.com/apache/incubator-mxnet/issues/19045. We can pregenerate the states and inputs to avoid flaky tests.





[GitHub] [incubator-mxnet] mxnet-bot commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
mxnet-bot commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683184321


   Hey @Zha0q1 , Thanks for submitting the PR 
   All tests are already queued to run once. If tests fail, you can trigger one or more tests again with the following commands: 
   - To trigger all jobs: @mxnet-bot run ci [all] 
   - To trigger specific jobs: @mxnet-bot run ci [job1, job2] 
   *** 
   **CI supported jobs**: [centos-cpu, edge, windows-gpu, windows-cpu, website, miscellaneous, unix-cpu, unix-gpu, centos-gpu, sanity, clang]
   *** 
   _Note_: 
    Only following 3 categories can trigger CI :PR Author, MXNet Committer, Jenkins Admin. 
   All CI tests must pass before the PR can be merged. 
   





[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#discussion_r480296732



##########
File path: src/operator/rnn.cc
##########
@@ -69,6 +69,10 @@ static bool RNNShape(const nnvm::NodeAttrs& attrs,
   CHECK_EQ(dshape.ndim(), 3U) \
       << "Input data should be rank-3 tensor of dim [sequence length, batch size, input size]";
   // data: [sequence len, batch, input dimension]
+  for (int i = 0; i < dshape.ndim(); i++) {
+    CHECK_LT(dshape[i], INT32_MAX) << "ValueError: RNN does not support large"

Review comment:
    Can you also point out which dimension gives the error? This will help users understand which dimension they need to correct.







[GitHub] [incubator-mxnet] sxjscience commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
sxjscience commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683957451


   > > Previously, the tests related to RNN are in https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_gluon_rnn.py and there is a gradient check. We may do it later, but I'm thinking that large shapes may fail the gradient check due to some numerical issues.
   > 
   > I think the gradient check there only checks consistency between slightly different runs of the RNN; there is no check for absolute correctness. For large tensor RNN, what gradient checks would you suggest?
   
   I see... So we haven't checked the gradient of RNN. We can consider adding some finite difference checks later to avoid future problems with RNN layers. This may be out of the scope of this PR. @Zha0q1 Would you be interested in taking a look?





[GitHub] [incubator-mxnet] sxjscience merged pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
sxjscience merged pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033


   






[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
access2rohit commented on a change in pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#discussion_r480298290



##########
File path: tests/nightly/test_np_large_array.py
##########
@@ -875,34 +875,65 @@ def test_digamma():
                 rtol=1e-3, atol=1e-5)
 
 @use_np
-@pytest.mark.skip(reason='broken on large tensors; also backward errors out')
-def test_rnn():
+def test_rnn_dim_check():
+    L_SEQ, BAT, L_INP, L_STA = 2**31, 4, 2**10, 2
+    data = np.random.uniform(-1, 1, (L_SEQ, BAT, L_INP))

Review comment:
    Why random input? Because then we are forced to use npx.waitall(). Is it not possible to use fixed input for certain cells and expect a particular output at those locations?








[GitHub] [incubator-mxnet] sxjscience edited a comment on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
sxjscience edited a comment on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683921735


   Should we add some gradient check for the RNN + Large Tensor? (Although it may take some time).








[GitHub] [incubator-mxnet] Zha0q1 commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
Zha0q1 commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683964791


   > To start with, we may first check the gradient of smaller workloads by calculating the finite difference: #19045. We can pregenerate the states and inputs to avoid flaky tests.
   
   Just a random thought: since we would be comparing numerical and analytical gradient results, I think there might be some requirements on the precision/scale of inputs to limit the error?
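   (A small numpy sketch of that tradeoff, with `tanh` standing in for the op: the central-difference error first shrinks as `eps` shrinks, then grows again once floating-point rounding dominates, which is why the input scale and `eps` need to be tuned together.)
   
   ```
   import numpy as np
   
   x = 0.3
   exact = 1.0 - np.tanh(x) ** 2        # analytical derivative of tanh
   for eps in (1e-1, 1e-2, 1e-4, 1e-8, 1e-12):
       num = (np.tanh(x + eps) - np.tanh(x - eps)) / (2 * eps)
       print(f"eps={eps:g}  abs error={abs(num - exact):.1e}")
   ```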





[GitHub] [incubator-mxnet] sxjscience commented on pull request #19033: Numpy RNN operator large dim checks

Posted by GitBox <gi...@apache.org>.
sxjscience commented on pull request #19033:
URL: https://github.com/apache/incubator-mxnet/pull/19033#issuecomment-683936777


   Previously, the tests related to RNN are in https://github.com/apache/incubator-mxnet/blob/master/tests/python/unittest/test_gluon_rnn.py and there is a gradient check. We may do it later, but I'm thinking that large shapes may fail the gradient check due to some numerical issues.

