Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/12/09 23:46:15 UTC

[GitHub] [tvm] mbrookhart opened a new pull request #7074: Fix QNN type inference

mbrookhart opened a new pull request #7074:
URL: https://github.com/apache/tvm/pull/7074


   @masahi @anijain2305 
   
   Fix for #7067 
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] masahi commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539766653



##########
File path: tests/python/frontend/pytorch/qnn_test.py
##########
@@ -32,17 +32,58 @@
 from tvm.relay.frontend.pytorch_utils import is_version_greater_than
 from tvm.contrib.download import download_testdata
 
+from tvm.relay.dataflow_pattern import wildcard, is_op
+from tvm.relay.op.contrib.register import register_pattern_table
+from tvm.relay.op.contrib.register import get_pattern_table
+
 
 def torch_version_check():
     from packaging import version
 
     return version.parse(torch.__version__) > version.parse("1.4.0")
 
 
+def make_qnn_add_pattern():
+    lhs = wildcard()
+    rhs = wildcard()
+    lhs_scale = wildcard()
+    lhs_zero_point = wildcard()
+    rhs_scale = wildcard()
+    rhs_zero_point = wildcard()
+    output_scale = wildcard()
+    output_zero_point = wildcard()
+    qadd = is_op("qnn.add")(
+        lhs,
+        rhs,
+        lhs_scale,
+        lhs_zero_point,
+        rhs_scale,
+        rhs_zero_point,
+        output_scale,
+        output_zero_point,
+    )
+    return qadd.optional(is_op("clip"))
+
+
+@register_pattern_table("test_table")
+def pattern_table():
+    return [
+        ("qnn_add", make_qnn_add_pattern()),
+    ]
+
+
 def get_tvm_runtime(script_module, input_name, ishape):
 
     input_shapes = [(input_name, ishape)]
     mod, params = relay.frontend.from_pytorch(script_module, input_shapes)
+    pattern_table = get_pattern_table("test_table")

Review comment:
       This is added to make sure the `MergeComposite` pass at L83 doesn't error; it comes straight from the patch in https://github.com/apache/tvm/issues/7067.
   
   I'll ask the issue reporter to clean this up and make a standalone test for this issue after we merge this PR.







[GitHub] [tvm] masahi commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539764667



##########
File path: src/relay/qnn/op/op_common.h
##########
@@ -171,6 +171,11 @@ static inline bool QnnBroadcastRel(const Array<Type>& types, int num_inputs, con
   ICHECK_EQ(types.size(), kNumQnnBinaryOpArgTypes);
 
   // Check the scale and zero point types
+  for (size_t i = 0; i < 8; ++i) {

Review comment:
       Because this is the type relation for a binary operator like `qnn.add`, it has scales and zero points for both lhs and rhs.
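For reference, the nine type slots the loop iterates over can be sketched like this. This is an illustrative Python model of the layout, not TVM's actual C++ implementation; the slot names are taken from the `qnn.add` pattern in the test diff above, and `None` stands in for `relay.IncompleteType`:

```python
# Toy model of the type vector QnnBroadcastRel receives for a QNN binary
# op such as qnn.add: 8 input slots plus 1 output slot (9 total).
QNN_BINARY_SLOTS = [
    "lhs", "rhs",                         # 0-1: input tensors
    "lhs_scale", "lhs_zero_point",        # 2-3
    "rhs_scale", "rhs_zero_point",        # 4-5
    "output_scale", "output_zero_point",  # 6-7
    "output",                             # 8: inferred result type
]


def qnn_broadcast_rel_prechecks(types):
    """Defer (return False) while any of the 8 input slots is incomplete."""
    assert len(types) == len(QNN_BINARY_SLOTS)
    for i in range(8):  # the output slot (8) is filled in, not checked
        if types[i] is None:  # None models relay.IncompleteType
            return False
    return True
```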







[GitHub] [tvm] masahi commented on pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#issuecomment-742146711


   @anijain2305 So after https://github.com/apache/tvm/pull/6704, it seems the type inferencer can pass `IncompleteType` to QNN type relation functions, which by itself is not wrong. @mbrookhart applied the same fix to the dynamic op type relation functions to make type inference pass.
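As a rough mental model of why returning `false` on an `IncompleteType` is "not wrong": the solver simply defers that relation and re-runs it once other relations have resolved more types. This is a toy sketch of that retry behavior, not TVM's actual solver; `None` stands in for `relay.IncompleteType`:

```python
def add_rel(types):
    """Resolve out = lhs once both inputs are known (shapes elided)."""
    if types["lhs"] is None or types["rhs"] is None:
        return False            # defer: an input is still incomplete
    types["out"] = types["lhs"]
    return True


def solve(pending, types):
    """Re-run pending relations until a fixpoint is reached."""
    progress = True
    while pending and progress:
        progress = False
        for rel in list(pending):
            if rel(types):
                pending.remove(rel)
                progress = True
    return not pending          # True iff every relation resolved


types = {"lhs": "int8", "rhs": None, "out": None}
add_rel(types)                  # deferred on the first pass, no error
types["rhs"] = "int8"           # another relation resolves rhs later
solve([add_rel], types)         # now add_rel succeeds and fills in out
```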






[GitHub] [tvm] jwfromm commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
jwfromm commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539753995



##########
File path: src/relay/qnn/op/op_common.h
##########
@@ -171,6 +171,11 @@ static inline bool QnnBroadcastRel(const Array<Type>& types, int num_inputs, con
   ICHECK_EQ(types.size(), kNumQnnBinaryOpArgTypes);
 
   // Check the scale and zero point types
+  for (size_t i = 0; i < 8; ++i) {

Review comment:
       Why does this one have so many scales and zero points? The others with 4 type checks make sense, but checking 9 types here is a clear outlier.

##########
File path: tests/python/frontend/pytorch/qnn_test.py
##########
@@ -32,17 +32,58 @@
 from tvm.relay.frontend.pytorch_utils import is_version_greater_than
 from tvm.contrib.download import download_testdata
 
+from tvm.relay.dataflow_pattern import wildcard, is_op
+from tvm.relay.op.contrib.register import register_pattern_table
+from tvm.relay.op.contrib.register import get_pattern_table
+
 
 def torch_version_check():
     from packaging import version
 
     return version.parse(torch.__version__) > version.parse("1.4.0")
 
 
+def make_qnn_add_pattern():
+    lhs = wildcard()
+    rhs = wildcard()
+    lhs_scale = wildcard()
+    lhs_zero_point = wildcard()
+    rhs_scale = wildcard()
+    rhs_zero_point = wildcard()
+    output_scale = wildcard()
+    output_zero_point = wildcard()
+    qadd = is_op("qnn.add")(
+        lhs,
+        rhs,
+        lhs_scale,
+        lhs_zero_point,
+        rhs_scale,
+        rhs_zero_point,
+        output_scale,
+        output_zero_point,
+    )
+    return qadd.optional(is_op("clip"))
+
+
+@register_pattern_table("test_table")
+def pattern_table():
+    return [
+        ("qnn_add", make_qnn_add_pattern()),
+    ]
+
+
 def get_tvm_runtime(script_module, input_name, ishape):
 
     input_shapes = [(input_name, ishape)]
     mod, params = relay.frontend.from_pytorch(script_module, input_shapes)
+    pattern_table = get_pattern_table("test_table")

Review comment:
       Can you add a comment for what this block is doing?

##########
File path: src/relay/qnn/op/requantize.cc
##########
@@ -263,6 +263,14 @@ bool RequantizeRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
+  if (types[0].as<IncompleteTypeNode>()) {
+    return false;
+  }
+  for (size_t i = 3; i < 5; ++i) {

Review comment:
       While we're adding a bunch of type checks, can you add a comment indicating what each input represents, something like `// Expected types: data, scale, zero_point, ...`







[GitHub] [tvm] masahi commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539769256



##########
File path: src/relay/qnn/op/requantize.cc
##########
@@ -263,6 +263,14 @@ bool RequantizeRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
+  if (types[0].as<IncompleteTypeNode>()) {

Review comment:
       @mbrookhart I think this is not necessary, as this case is already covered by https://github.com/apache/tvm/blob/main/src/relay/qnn/op/requantize.cc#L260-L264 above.








[GitHub] [tvm] masahi commented on pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#issuecomment-742364417


   Thanks @mbrookhart @jwfromm 





[GitHub] [tvm] masahi merged pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
masahi merged pull request #7074:
URL: https://github.com/apache/tvm/pull/7074


   






[GitHub] [tvm] mbrookhart commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539826764



##########
File path: src/relay/qnn/op/convolution.cc
##########
@@ -57,22 +59,27 @@ bool QnnConv2DRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   ICHECK(param->out_dtype.bits() > 0) << "Output dtype bits should be greater than 0.";
 
   // Check the types of scale and zero points.
+  for (size_t i = 2; i < 5; ++i) {

Review comment:
       The weight scale is assigned further down:
   ` AssignType(types[5], DataType::Float(32), weight->shape[axis], reporter);  // weight_scale`
   
   It's one of those cases where I would expect it to be an input, but it's being set here.







[GitHub] [tvm] masahi commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
masahi commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539826881



##########
File path: src/relay/qnn/op/convolution.cc
##########
@@ -57,22 +59,27 @@ bool QnnConv2DRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   ICHECK(param->out_dtype.bits() > 0) << "Output dtype bits should be greater than 0.";
 
   // Check the types of scale and zero points.
+  for (size_t i = 2; i < 5; ++i) {

Review comment:
       I think @mbrookhart added checks for the types verified by `ICHECK(IsScalarType(...))` below. There is no check for the weight scale being scalar because it can be a vector in the per-channel case.
   
   But yes, for consistency we could also add an incompleteness check for the weight scale type.
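Under the slot layout implied by this thread (the loop bounds from the diff and the `AssignType(types[5], ...)  // weight_scale` line quoted in the review; the exact ordering of slots 2-4 is an assumption here, not verified TVM source), the pre-check can be sketched as:

```python
# Illustrative model of QnnConv2DRel's pre-checks; not TVM's C++ code.
# Slot 5 (weight_scale) is deliberately skipped because it may be a
# per-channel vector rather than a scalar. None models IncompleteType.
QNN_CONV2D_SLOTS = [
    "data",               # 0
    "weight",             # 1
    "input_zero_point",   # 2: checked by the loop
    "kernel_zero_point",  # 3: checked by the loop
    "input_scale",        # 4: checked by the loop
    "weight_scale",       # 5: skipped, assigned further down instead
    "output",             # 6
]


def qnn_conv2d_prechecks(types):
    """Defer while any of the scalar scale/zero-point slots is incomplete."""
    for i in range(2, 5):
        if types[i] is None:
            return False
    return True
```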







[GitHub] [tvm] mbrookhart commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539827184



##########
File path: src/relay/qnn/op/requantize.cc
##########
@@ -256,13 +256,20 @@ Expr RequantizeQnnCanonicalize(const Attrs& attrs, const Array<Expr>& new_args,
  */
 bool RequantizeRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
                    const TypeReporter& reporter) {
+  // Expected Types: data, input_scale, input_zero_point, output_scale, output_zero_point, output
   ICHECK_EQ(types.size(), 6);
   const auto* data = types[0].as<TensorTypeNode>();
 
   if (data == nullptr) {
     return false;
   }
 
+  // Check the scale and zero point types
+  for (size_t i = 3; i < 5; ++i) {

Review comment:
       A few lines down you'll see assignments for `input_scale` and `input_zero_point`, but checks on `output_scale` and `output_zero_point`.
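Using the expected-types comment from the diff above, the loop bounds can be sketched like this (an illustrative Python model, not the C++ implementation; `None` stands in for `IncompleteType`):

```python
# Models RequantizeRel's pre-check over the six type slots. Only slots
# 3 and 4 are pre-checked, because input_scale and input_zero_point are
# assigned by the relation further down rather than checked.
REQUANTIZE_SLOTS = [
    "data",               # 0
    "input_scale",        # 1: assigned further down, not pre-checked
    "input_zero_point",   # 2: assigned further down, not pre-checked
    "output_scale",       # 3: pre-checked by this loop
    "output_zero_point",  # 4: pre-checked by this loop
    "output",             # 5
]


def requantize_prechecks(types):
    """Defer when output_scale or output_zero_point is incomplete."""
    for i in range(3, 5):
        if types[i] is None:
            return False
    return True
```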







[GitHub] [tvm] jwfromm commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
jwfromm commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539823312



##########
File path: src/relay/qnn/op/convolution.cc
##########
@@ -57,22 +59,27 @@ bool QnnConv2DRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   ICHECK(param->out_dtype.bits() > 0) << "Output dtype bits should be greater than 0.";
 
   // Check the types of scale and zero points.
+  for (size_t i = 2; i < 5; ++i) {

Review comment:
       Am I counting wrong or is this skipping `weight_scale`?

##########
File path: src/relay/qnn/op/requantize.cc
##########
@@ -256,13 +256,20 @@ Expr RequantizeQnnCanonicalize(const Attrs& attrs, const Array<Expr>& new_args,
  */
 bool RequantizeRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
                    const TypeReporter& reporter) {
+  // Expected Types: data, input_scale, input_zero_point, output_scale, output_zero_point, output
   ICHECK_EQ(types.size(), 6);
   const auto* data = types[0].as<TensorTypeNode>();
 
   if (data == nullptr) {
     return false;
   }
 
+  // Check the scale and zero point types
+  for (size_t i = 3; i < 5; ++i) {

Review comment:
       These values also don't seem to line up with the expected types.







[GitHub] [tvm] mbrookhart commented on pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#issuecomment-742211057


   @jwfromm These functions are a little odd: they often check types for some of the scales/zero points and then run assignments on others that I would expect to be inputs. I assume there is a reason for this; perhaps @anijain2305 knows? Anyway, to make it work, I only added the `return false` on the input types we actually end up checking.
   
   I just pushed a bunch of comments describing what we expect in the types vector; I hope that helps.





[GitHub] [tvm] mbrookhart commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539814026



##########
File path: src/relay/qnn/op/op_common.h
##########
@@ -171,6 +171,11 @@ static inline bool QnnBroadcastRel(const Array<Type>& types, int num_inputs, con
   ICHECK_EQ(types.size(), kNumQnnBinaryOpArgTypes);
 
   // Check the scale and zero point types
+  for (size_t i = 0; i < 8; ++i) {

Review comment:
       Yes, it has a tensor, scale, and zero point for each of lhs, rhs, and the output; I added more comments.

##########
File path: src/relay/qnn/op/requantize.cc
##########
@@ -263,6 +263,14 @@ bool RequantizeRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
+  if (types[0].as<IncompleteTypeNode>()) {

Review comment:
       Good catch, thanks!

##########
File path: src/relay/qnn/op/requantize.cc
##########
@@ -263,6 +263,14 @@ bool RequantizeRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
     return false;
   }
 
+  if (types[0].as<IncompleteTypeNode>()) {
+    return false;
+  }
+  for (size_t i = 3; i < 5; ++i) {

Review comment:
       Done








[GitHub] [tvm] mbrookhart commented on a change in pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
mbrookhart commented on a change in pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#discussion_r539827411



##########
File path: src/relay/qnn/op/requantize.cc
##########
@@ -256,13 +256,20 @@ Expr RequantizeQnnCanonicalize(const Attrs& attrs, const Array<Expr>& new_args,
  */
 bool RequantizeRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
                    const TypeReporter& reporter) {
+  // Expected Types: data, input_scale, input_zero_point, output_scale, output_zero_point, output
   ICHECK_EQ(types.size(), 6);
   const auto* data = types[0].as<TensorTypeNode>();
 
   if (data == nullptr) {
     return false;
   }
 
+  // Check the scale and zero point types
+  for (size_t i = 3; i < 5; ++i) {

Review comment:
       So this is pre-checking `output_scale` and `output_zero_point`.







[GitHub] [tvm] masahi edited a comment on pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#issuecomment-742146711


   @anijain2305 So after https://github.com/apache/tvm/pull/6704, it seems the type inferencer can pass `IncompleteType` to QNN type relation functions, which by itself is not wrong. Previously, @mbrookhart applied the same fix to the dynamic op type relation functions to make type inference pass.





[GitHub] [tvm] jwfromm commented on pull request #7074: Fix QNN type inference

Posted by GitBox <gi...@apache.org>.
jwfromm commented on pull request #7074:
URL: https://github.com/apache/tvm/pull/7074#issuecomment-742155528


   I'm a little confused about what exactly is being type checked. The comments say it's the scales and zero points, but the number of checked types in the loop doesn't match up. A little better documentation would go a long way.

