Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2020/01/10 07:19:31 UTC

[GitHub] [incubator-mxnet] rongzha1 opened a new pull request #17265: Add bfloat16 floating-point format support based on AMP

rongzha1 opened a new pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265
 
 
   ## Description ##
   Bfloat16 is widely used in deep learning, especially in training, to get better performance.
   
   This PR adds bf16 support based on the MXNet AMP (Automatic Mixed Precision) module.
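   
   A minimal usage sketch of the intended workflow (the checkpoint prefix is hypothetical, and `target_dtype='bfloat16'` is assumed to plug into the existing `amp.convert_model` API that already handles float16):
   
   ```python
   import mxnet as mx
   from mxnet.contrib import amp
   
   # Load a pretrained FP32 symbolic model (hypothetical checkpoint prefix).
   sym, arg_params, aux_params = mx.model.load_checkpoint('model', 0)
   
   # Convert eligible operators to bfloat16 via AMP; 'bfloat16' as a target
   # dtype is what this PR adds (assumption: same signature as the fp16 path).
   bf16_sym, bf16_args, bf16_aux = amp.convert_model(
       sym, arg_params, aux_params, target_dtype='bfloat16')
   ```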
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be made.
   - Interesting edge cases to note here
   
   This PR has passed unit tests and pre-CI tests on a local machine.
   Unit tests are added for this PR.
   
   @ZhennanQin @ElaineBao @xinyu-intel @TaoLv @PatricZhao 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
eric-haibin-lin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365559408
 
 

 ##########
 File path: python/mxnet/ndarray/ndarray.py
 ##########
 @@ -83,6 +84,7 @@
     5: np.int8,
     6: np.int64,
     7: np.bool_,
+    12: np.dtype([('bfloat16', np.uint16)]),
 
 Review comment:
   why 12?


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r366948096
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/base.h
 ##########
 @@ -988,6 +1034,7 @@ struct minimum {
 };
 }  // namespace red
 
+#ifndef __NVCC__
 
 Review comment:
   I don't like this - can we do something similar to what was done for fp16 on CPUs that do not support F16C instructions (i.e. code that runs, but may be slower than code for hardware that natively supports bfloat16)?
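   
   To illustrate the kind of software fallback being suggested (a rough sketch, not the mshadow code): bfloat16 stores the top 16 bits of an IEEE float32, so a portable CPU path needs only integer shifts, analogous to the fp16 software path used where F16C is unavailable. Round-to-nearest-even is an assumption about the desired rounding mode.
   
   ```python
   import numpy as np
   
   def float32_to_bfloat16_bits(x):
       """Keep the upper 16 bits of a float32, rounding to nearest even."""
       bits = np.float32(x).view(np.uint32)
       # Bias chosen so that ties round to the nearest even bfloat16 value.
       bias = np.uint32(0x7FFF) + ((bits >> np.uint32(16)) & np.uint32(1))
       return np.uint16((bits + bias) >> np.uint32(16))
   
   def bfloat16_bits_to_float32(b):
       """Widen stored bits back to float32 by zero-filling the low mantissa bits."""
       return (np.uint32(b) << np.uint32(16)).view(np.float32)
   
   print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159)))  # ~3.140625
   ```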


[GitHub] [incubator-mxnet] pengzhao-intel commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
pengzhao-intel commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r366199706
 
 

 ##########
 File path: example/quantization/imagenet_inference.py
 ##########
 @@ -99,7 +100,34 @@ def score(sym, arg_params, aux_params, data, devs, label_name, max_num_examples,
             logger.info(m.get())
 
 
-def benchmark_score(symbol_file, ctx, batch_size, num_batches, data_layer_type, logger=None):
+def low_precison_convert(model_name, low_precision, sym, arg_params, aux_params, excluded_sym_names=[]):
+    if low_precision == 'bfloat16':
+        if model_name.find('imagenet1k-resnet-152') != -1:
+            excluded_sym_names += ['conv0']
+        elif model_name.find('imagenet1k-inception-bn') != -1:
+            excluded_sym_names += ['conv_1']
+        elif model_name.find('resnet') != -1 and model_name.find('v1') != -1:
+            excluded_sym_names += ['resnetv10_conv0_fwd']
+        elif model_name.find('resnet') != -1 and model_name.find('v2') != -1:
+            excluded_sym_names += ['resnetv20_conv0_fwd']
+        elif model_name.find('vgg') != -1:
+            excluded_sym_names += ['vgg0_conv0_fwd']
+        elif model_name.find('squeezenet1') != -1:
+            excluded_sym_names += ['squeezenet0_conv0_fwd']
+        elif model_name.find('mobilenet') != -1 and model_name.find('v2') == -1:
+            excluded_sym_names += ['mobilenet0_conv0_fwd']
+        elif model_name.find('mobilenet') != -1 and model_name.find('v2') != -1:
+            excluded_sym_names += ['mobilenetv20_conv0_fwd']
+        elif model_name.find('inceptionv3') != -1:
+            excluded_sym_names += ['inception30_conv0_fwd']
 
 Review comment:
   Please add a comment explaining this temporary performance workaround; we will convert all conv layers later.


[GitHub] [incubator-mxnet] xinyu-intel commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
xinyu-intel commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r369456287
 
 

 ##########
 File path: src/operator/nn/fully_connected.cc
 ##########
 @@ -237,7 +238,7 @@ static bool BackwardFCStorageType(const nnvm::NodeAttrs& attrs,
   bool dispatched = false;
   if (!dispatched && common::ContainsOnlyStorage(*in_attrs, mxnet::kDefaultStorage)) {
     dispatched = storage_type_assign(out_attrs, mxnet::kDefaultStorage,
-                                     dispatch_mode, DispatchMode::kFCompute);
+                                     dispatch_mode, DispatchMode::kFComputeEx);
 
 Review comment:
   We may need to enable the DNNL FC backward path in another PR since there is a known issue.


[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
eric-haibin-lin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365559345
 
 

 ##########
 File path: include/mxnet/ndarray.h
 ##########
 @@ -770,6 +770,12 @@ class NDArray {
    */
   NDArray Reorder2Default() const;
 
+    /*
+   * This creates a new NDArray using f32 with the reordered data.
+   * It doesn't affect the data of the original NDArray.
+   */
+  NDArray Reorder2DefaultFp32() const;
 
 Review comment:
   Adding a dtype-specific interface looks very ad hoc.


[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
pengzhao-intel merged pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265
 
 
   


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365391125
 
 

 ##########
 File path: python/mxnet/gluon/parameter.py
 ##########
 @@ -289,11 +289,18 @@ def _load_init(self, data, ctx, cast_dtype=False, dtype_source='current'):
                 elif dtype_source == 'saved':
                     self.dtype = data.dtype
             else:
-                assert np.dtype(self.dtype).type == data.dtype, \
-                "Failed loading Parameter '%s' from saved params: " \
-                "dtype incompatible expected %s vs saved %s. " \
-                "Set cast_dtype=True to cast the dtype of saved params."%(
-                    self.name, str(self.dtype), str(data.dtype))
+                if data.dtype == np.dtype([('bfloat16', np.uint16)]):
+                    assert np.dtype(self.dtype) == data.dtype, \
+                    "Failed loading Parameter '%s' from saved params: " \
+                    "dtype incompatible expected %s vs saved %s. " \
+                    "Set cast_dtype=True to cast the dtype of saved params."%(
+                        self.name, str(self.dtype), str(data.dtype))
+                else:
+                    assert np.dtype(self.dtype).type == data.dtype, \
+                    "Failed loading Parameter '%s' from saved params: " \
+                    "dtype incompatible expected %s vs saved %s. " \
+                    "Set cast_dtype=True to cast the dtype of saved params."%(
+                        self.name, str(self.dtype), str(data.dtype))
 
 Review comment:
   Aren't those 2 codepaths the same?


[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r366153708
 
 

 ##########
 File path: src/engine/naive_engine.cc
 ##########
 @@ -55,7 +55,7 @@ class NaiveEngine final : public Engine {
     std::vector<VarHandle> const_vars;
     std::vector<VarHandle> mutable_vars;
     FnProperty prop;
-    const char* opr_name;
+    std::string opr_name;
 
 Review comment:
   This is a bugfix for the naive engine. If I remember correctly, MXNet already fixed this with a different approach, so we can drop this change from this PR.


[GitHub] [incubator-mxnet] rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r366115432
 
 

 ##########
 File path: include/mxnet/ndarray.h
 ##########
 @@ -770,6 +770,12 @@ class NDArray {
    */
   NDArray Reorder2Default() const;
 
+    /*
+   * This creates a new NDArray using f32 with the reordered data.
+   * It doesn't affect the data of the original NDArray.
+   */
+  NDArray Reorder2DefaultFp32() const;
 
 Review comment:
   OK, will change it to Reorder2DefaultFloatFormat()


[GitHub] [incubator-mxnet] ElaineBao commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ElaineBao commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365696427
 
 

 ##########
 File path: python/mxnet/ndarray/ndarray.py
 ##########
 @@ -83,6 +84,7 @@
     5: np.int8,
     6: np.int64,
     7: np.bool_,
+    12: np.dtype([('bfloat16', np.uint16)]),
 
 Review comment:
   This is to align the `TypeFlag` defined in mshadow


[GitHub] [incubator-mxnet] TaoLv commented on issue #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
TaoLv commented on issue #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-586329605
 
 
   Hi @leezu, @larroy, this PR passes the ARM builds but always hits the test timeout. 
   http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fedge/detail/PR-17265/29/pipeline
   
   We don't have an environment to reproduce it. Could you please take a look or suggest how to debug further?


[GitHub] [incubator-mxnet] rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r379713720
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/bfloat.h
 ##########
 @@ -0,0 +1,167 @@
+/*!
 
 Review comment:
   Done, thanks.


[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r367265169
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/base.h
 ##########
 @@ -988,6 +1034,7 @@ struct minimum {
 };
 }  // namespace red
 
+#ifndef __NVCC__
 
 Review comment:
   We don't have enough background/knowledge to enable bfloat16 on the GPU side, so we probably can't make the change you proposed. Alternatively, any code refactoring on the GPU side is welcome; you may change this as you want in a follow-up PR.


[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365519649
 
 

 ##########
 File path: python/mxnet/contrib/amp/amp.py
 ##########
 @@ -43,14 +44,17 @@
 from ... import optimizer as opt
 from .loss_scaler import LossScaler
 
+bfloat16 = np.dtype([('bfloat16', np.uint16)])
 
 Review comment:
   This is a good topic, and I want to have a discussion about it.
   Currently, MXNet doesn't have its own type system; it simply uses numpy.dtype. Numpy doesn't natively support bfloat16, so we define bfloat16 as a customized numpy type.
   Pros: compatible with the current design; isinstance(bfloat16, np.dtype) returns True.
   Cons: bfloat16.name doesn't work; you have to use bfloat16.names[0] instead.
   Another solution is to create MXNet's own data type system, like PyTorch and TF do. That is a big API change, so we would like it to happen when upgrading to MXNet 2.0.
   
   For now, we prefer this approach to enable bfloat16 in MXNet 1.x, and to refactor it in MXNet 2.0.
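   
   A small sketch of the behaviour described above, using the structured-dtype definition from this PR (the exact string returned by `.name` is a generic void name, not 'bfloat16'):
   
   ```python
   import numpy as np
   
   # bfloat16 as a customized numpy (structured) dtype, as defined in amp.py.
   bfloat16 = np.dtype([('bfloat16', np.uint16)])
   
   print(isinstance(bfloat16, np.dtype))  # True: fits the existing dtype plumbing
   print(bfloat16.names[0])               # 'bfloat16': the field name has to be used...
   print(bfloat16.name)                   # ...because .name only gives a generic void name
   ```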


[GitHub] [incubator-mxnet] pengzhao-intel commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
pengzhao-intel commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r366151188
 
 

 ##########
 File path: .gitmodules
 ##########
 @@ -6,7 +6,7 @@
 	url = https://github.com/dmlc/ps-lite
 [submodule "3rdparty/dlpack"]
 	path = 3rdparty/dlpack
-	url = https://github.com/dmlc/dlpack
+	url = https://github.com/ElaineBao/dlpack.git
 
 Review comment:
   Definitely :) We're working on a PR for the related code in dlpack: https://github.com/dmlc/dlpack/issues/45


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365368147
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/base.h
 ##########
 @@ -312,6 +338,11 @@ enum TypeFlag {
   kInt8  = 5,
   kInt64 = 6,
   kBool = 7,
+  kInt16 = 8,
+  kUint16 = 9,
+  kUint32 = 10,
+  kUint64 = 11,
 
 Review comment:
   Why add these additional types here? No operator supports them anyway, right?


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365376274
 
 

 ##########
 File path: python/mxnet/contrib/amp/amp.py
 ##########
 @@ -43,14 +44,17 @@
 from ... import optimizer as opt
 from .loss_scaler import LossScaler
 
+bfloat16 = np.dtype([('bfloat16', np.uint16)])
 
 Review comment:
   Can we have this dtype accessible (as `mx.bfloat16` or something similar)?


[GitHub] [incubator-mxnet] rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r369913307
 
 

 ##########
 File path: .gitmodules
 ##########
 @@ -6,7 +6,7 @@
 	url = https://github.com/dmlc/ps-lite
 [submodule "3rdparty/dlpack"]
 	path = 3rdparty/dlpack
-	url = https://github.com/dmlc/dlpack
+	url = https://github.com/dmlc/dlpack.git
 
 Review comment:
   OK


[GitHub] [incubator-mxnet] ElaineBao commented on issue #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ElaineBao commented on issue #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-574050844
 
 
   > For bfloat16 training, which loss scalar is recommended? Do we also need to perform NaN checks?
   
   Bfloat16 has the same dynamic range as float32, since they have the same number of exponent bits, so it can represent gradients directly and doesn't require loss scaling the way fp16 does.
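   
   A quick numpy illustration of that point (not MXNet code): bfloat16 reuses float32's 8 exponent bits, so keeping only the top 16 bits of a float32 preserves magnitudes up to roughly 3.4e38, whereas fp16, with 5 exponent bits, overflows near 65504, which is why fp16 training needs loss scaling.
   
   ```python
   import numpy as np
   
   def truncate_to_bf16(x):
       """Keep only the top 16 bits of a float32 value (bfloat16 truncation)."""
       bits = np.float32(x).view(np.uint32) & np.uint32(0xFFFF0000)
       return bits.view(np.float32)
   
   grad = np.float32(1e30)          # magnitude far beyond the fp16 range
   print(truncate_to_bf16(grad))    # still ~1e30 after bfloat16 truncation
   print(np.float16(grad))          # inf: fp16 overflows, hence loss scaling
   ```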


[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365518812
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/base.h
 ##########
 @@ -312,6 +338,11 @@ enum TypeFlag {
   kInt8  = 5,
   kInt64 = 6,
   kBool = 7,
+  kInt16 = 8,
+  kUint16 = 9,
+  kUint32 = 10,
+  kUint64 = 11,
 
 Review comment:
   This is to align the definition with DLPack; otherwise we would have to reserve those numbers anyway. Even though we don't use them currently, there is no harm in adding them.



[GitHub] [incubator-mxnet] TaoLv commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
TaoLv commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r379484516
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/bfloat.h
 ##########
 @@ -0,0 +1,167 @@
+/*!
 
 Review comment:
   @szha Do we need Apache license header for this new file?


[GitHub] [incubator-mxnet] zhreshold commented on issue #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
zhreshold commented on issue #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-586543482
 
 
   @larroy Do you have any idea how to display more logs for the edge tests? It consistently fails at this stage.


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365370176
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/bfloat.h
 ##########
 @@ -0,0 +1,167 @@
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file bfloat.h
+ * \brief definition of bfloat type.
+ *
+ * \author Zhennan Qin
+ */
+#ifndef MSHADOW_BFLOAT_H_
+#define MSHADOW_BFLOAT_H_
+#include "./base.h"
+
+/*! \brief namespace for mshadow */
+namespace mshadow {
+/* \brief name space for host/device portable bfloats */
+namespace bfloat {
+
+#define MSHADOW_BF16_OPERATOR_TYPE(RTYPE, ITYPE, OP)                      \
+  MSHADOW_XINLINE RTYPE operator OP (ITYPE a, bf16_t b) {                 \
+    return RTYPE(a OP float(b));  /* NOLINT(*) */                         \
+  }                                                                       \
+  MSHADOW_XINLINE RTYPE operator OP (bf16_t a, ITYPE b) {                 \
+    return RTYPE(float(a) OP b);  /* NOLINT(*) */                         \
+  }
+
+#define MSHADOW_BF16_OPERATOR(RTYPE, OP)                                  \
+  MSHADOW_XINLINE RTYPE operator OP (bf16_t a, bf16_t b) {                \
+    return RTYPE(static_cast<float>(a) OP float(b));  /* NOLINT(*) */     \
+  }                                                                       \
+  MSHADOW_BF16_OPERATOR_TYPE(float, float, OP)                            \
+  MSHADOW_BF16_OPERATOR_TYPE(double, double, OP)                          \
+  MSHADOW_BF16_OPERATOR_TYPE(float, int8_t, OP)                           \
+  MSHADOW_BF16_OPERATOR_TYPE(float, uint8_t, OP)                          \
+  MSHADOW_BF16_OPERATOR_TYPE(float, int32_t, OP)                          \
+  MSHADOW_BF16_OPERATOR_TYPE(float, uint32_t, OP)                         \
+  MSHADOW_BF16_OPERATOR_TYPE(float, int64_t, OP)                          \
+  MSHADOW_BF16_OPERATOR_TYPE(float, uint64_t, OP)
 
 Review comment:
   Returning float or double, while understandable, is different behavior from what is currently done for the half_t type. Could we discuss this and make them consistent?


[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
eric-haibin-lin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365559445
 
 

 ##########
 File path: src/engine/naive_engine.cc
 ##########
 @@ -55,7 +55,7 @@ class NaiveEngine final : public Engine {
     std::vector<VarHandle> const_vars;
     std::vector<VarHandle> mutable_vars;
     FnProperty prop;
-    const char* opr_name;
+    std::string opr_name;
 
 Review comment:
   Was this an issue blocking this feature? just curious 


[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365518893
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/bfloat.h
 ##########
 @@ -0,0 +1,167 @@
+/*!
+ *  Copyright (c) 2019 by Contributors
+ * \file bfloat.h
+ * \brief definition of bfloat type.
+ *
+ * \author Zhennan Qin
+ */
+#ifndef MSHADOW_BFLOAT_H_
+#define MSHADOW_BFLOAT_H_
+#include "./base.h"
+
+/*! \brief namespace for mshadow */
+namespace mshadow {
+/* \brief name space for host/device portable bfloats */
+namespace bfloat {
+
+#define MSHADOW_BF16_OPERATOR_TYPE(RTYPE, ITYPE, OP)                      \
+  MSHADOW_XINLINE RTYPE operator OP (ITYPE a, bf16_t b) {                 \
+    return RTYPE(a OP float(b));  /* NOLINT(*) */                         \
+  }                                                                       \
+  MSHADOW_XINLINE RTYPE operator OP (bf16_t a, ITYPE b) {                 \
+    return RTYPE(float(a) OP b);  /* NOLINT(*) */                         \
+  }
+
+#define MSHADOW_BF16_OPERATOR(RTYPE, OP)                                  \
+  MSHADOW_XINLINE RTYPE operator OP (bf16_t a, bf16_t b) {                \
+    return RTYPE(static_cast<float>(a) OP float(b));  /* NOLINT(*) */     \
+  }                                                                       \
+  MSHADOW_BF16_OPERATOR_TYPE(float, float, OP)                            \
+  MSHADOW_BF16_OPERATOR_TYPE(double, double, OP)                          \
+  MSHADOW_BF16_OPERATOR_TYPE(float, int8_t, OP)                           \
+  MSHADOW_BF16_OPERATOR_TYPE(float, uint8_t, OP)                          \
+  MSHADOW_BF16_OPERATOR_TYPE(float, int32_t, OP)                          \
+  MSHADOW_BF16_OPERATOR_TYPE(float, uint32_t, OP)                         \
+  MSHADOW_BF16_OPERATOR_TYPE(float, int64_t, OP)                          \
+  MSHADOW_BF16_OPERATOR_TYPE(float, uint64_t, OP)
 
 Review comment:
   Sure. Any suggestion here?


[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
eric-haibin-lin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r366188211
 
 

 ##########
 File path: src/engine/naive_engine.cc
 ##########
 @@ -55,7 +55,7 @@ class NaiveEngine final : public Engine {
     std::vector<VarHandle> const_vars;
     std::vector<VarHandle> mutable_vars;
     FnProperty prop;
-    const char* opr_name;
+    std::string opr_name;
 
 Review comment:
   I don't think it's merged yet. I'm ok with the change here.


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365367203
 
 

 ##########
 File path: .gitmodules
 ##########
 @@ -6,7 +6,7 @@
 	url = https://github.com/dmlc/ps-lite
 [submodule "3rdparty/dlpack"]
 	path = 3rdparty/dlpack
-	url = https://github.com/dmlc/dlpack
+	url = https://github.com/ElaineBao/dlpack.git
 
 Review comment:
   Will need to be changed once the changes are merged to upstream dlpack. Leaving this comment as a reminder ;-).


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r366978831
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/base.h
 ##########
 @@ -988,6 +1034,7 @@ struct minimum {
 };
 }  // namespace red
 
+#ifndef __NVCC__
 
 Review comment:
   You can implement `atomicAdd` (which seems to be the problem you are facing) with `atomicCAS`, like this: https://github.com/apache/incubator-mxnet/blob/master/src/common/cuda_utils.h#L702-L721


[GitHub] [incubator-mxnet] ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ZhennanQin commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365518924
 
 

 ##########
 File path: example/quantization/imagenet_inference.py
 ##########
 @@ -99,7 +100,34 @@ def score(sym, arg_params, aux_params, data, devs, label_name, max_num_examples,
             logger.info(m.get())
 
 
-def benchmark_score(symbol_file, ctx, batch_size, num_batches, data_layer_type, logger=None):
+def low_precison_convert(model_name, low_precision, sym, arg_params, aux_params, excluded_sym_names=[]):
+    if low_precision == 'bfloat16':
+        if model_name.find('imagenet1k-resnet-152') != -1:
+            excluded_sym_names += ['conv0']
+        elif model_name.find('imagenet1k-inception-bn') != -1:
+            excluded_sym_names += ['conv_1']
+        elif model_name.find('resnet') != -1 and model_name.find('v1') != -1:
+            excluded_sym_names += ['resnetv10_conv0_fwd']
+        elif model_name.find('resnet') != -1 and model_name.find('v2') != -1:
+            excluded_sym_names += ['resnetv20_conv0_fwd']
+        elif model_name.find('vgg') != -1:
+            excluded_sym_names += ['vgg0_conv0_fwd']
+        elif model_name.find('squeezenet1') != -1:
+            excluded_sym_names += ['squeezenet0_conv0_fwd']
+        elif model_name.find('mobilenet') != -1 and model_name.find('v2') == -1:
+            excluded_sym_names += ['mobilenet0_conv0_fwd']
+        elif model_name.find('mobilenet') != -1 and model_name.find('v2') != -1:
+            excluded_sym_names += ['mobilenetv20_conv0_fwd']
+        elif model_name.find('inceptionv3') != -1:
+            excluded_sym_names += ['inception30_conv0_fwd']
 
 Review comment:
   Not for accuracy, but for performance purposes. This can be removed once more bfloat16-capable hardware is available.
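   
   For context, the exclusion list ends up in the AMP conversion call, roughly like this (a sketch; the checkpoint prefix is illustrative, and `excluded_sym_names` is already part of the existing `amp.convert_model` signature):
   
   ```python
   import mxnet as mx
   from mxnet.contrib import amp
   
   sym, arg_params, aux_params = mx.model.load_checkpoint('imagenet1k-resnet-152', 0)
   
   # Keep the first convolution in float32 purely for speed on current hardware;
   # the other eligible layers are converted to bfloat16.
   bf16_sym, bf16_args, bf16_aux = amp.convert_model(
       sym, arg_params, aux_params,
       target_dtype='bfloat16',
       excluded_sym_names=['conv0'])
   ```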


[GitHub] [incubator-mxnet] rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r369913346
 
 

 ##########
 File path: src/operator/nn/fully_connected.cc
 ##########
 @@ -237,7 +238,7 @@ static bool BackwardFCStorageType(const nnvm::NodeAttrs& attrs,
   bool dispatched = false;
   if (!dispatched && common::ContainsOnlyStorage(*in_attrs, mxnet::kDefaultStorage)) {
     dispatched = storage_type_assign(out_attrs, mxnet::kDefaultStorage,
-                                     dispatch_mode, DispatchMode::kFCompute);
+                                     dispatch_mode, DispatchMode::kFComputeEx);
 
 Review comment:
   Thanks for the reminder.


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
ptrendx commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r365371650
 
 

 ##########
 File path: example/quantization/imagenet_inference.py
 ##########
 @@ -99,7 +100,34 @@ def score(sym, arg_params, aux_params, data, devs, label_name, max_num_examples,
             logger.info(m.get())
 
 
-def benchmark_score(symbol_file, ctx, batch_size, num_batches, data_layer_type, logger=None):
+def low_precison_convert(model_name, low_precision, sym, arg_params, aux_params, excluded_sym_names=[]):
+    if low_precision == 'bfloat16':
+        if model_name.find('imagenet1k-resnet-152') != -1:
+            excluded_sym_names += ['conv0']
+        elif model_name.find('imagenet1k-inception-bn') != -1:
+            excluded_sym_names += ['conv_1']
+        elif model_name.find('resnet') != -1 and model_name.find('v1') != -1:
+            excluded_sym_names += ['resnetv10_conv0_fwd']
+        elif model_name.find('resnet') != -1 and model_name.find('v2') != -1:
+            excluded_sym_names += ['resnetv20_conv0_fwd']
+        elif model_name.find('vgg') != -1:
+            excluded_sym_names += ['vgg0_conv0_fwd']
+        elif model_name.find('squeezenet1') != -1:
+            excluded_sym_names += ['squeezenet0_conv0_fwd']
+        elif model_name.find('mobilenet') != -1 and model_name.find('v2') == -1:
+            excluded_sym_names += ['mobilenet0_conv0_fwd']
+        elif model_name.find('mobilenet') != -1 and model_name.find('v2') != -1:
+            excluded_sym_names += ['mobilenetv20_conv0_fwd']
+        elif model_name.find('inceptionv3') != -1:
+            excluded_sym_names += ['inception30_conv0_fwd']
 
 Review comment:
   Why? Is there an accuracy issue without those exclusions?


[GitHub] [incubator-mxnet] xinyu-intel commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
xinyu-intel commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r369451812
 
 

 ##########
 File path: .gitmodules
 ##########
 @@ -6,7 +6,7 @@
 	url = https://github.com/dmlc/ps-lite
 [submodule "3rdparty/dlpack"]
 	path = 3rdparty/dlpack
-	url = https://github.com/dmlc/dlpack
+	url = https://github.com/dmlc/dlpack.git
 
 Review comment:
   Keep it the same.


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
pengzhao-intel commented on issue #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-583694595
 
 
   @ptrendx thanks for your review. Feel free to let me know if you have other concerns; we are going to merge this PR soon.


[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
pengzhao-intel commented on issue #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-586663188
 
 
   I am merging now. If there are any other comments, we can resolve them in a new PR.


[GitHub] [incubator-mxnet] szha commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
szha commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r379517986
 
 

 ##########
 File path: 3rdparty/mshadow/mshadow/bfloat.h
 ##########
 @@ -0,0 +1,167 @@
+/*!
 
 Review comment:
   yes


[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
eric-haibin-lin commented on issue #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#issuecomment-574898654
 
 
   @ElaineBao thanks for the explanation


[GitHub] [incubator-mxnet] rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP

Posted by GitBox <gi...@apache.org>.
rongzha1 commented on a change in pull request #17265: Add bfloat16 floating-point format support based on AMP 
URL: https://github.com/apache/incubator-mxnet/pull/17265#discussion_r368886511
 
 

 ##########
 File path: python/mxnet/gluon/parameter.py
 ##########
 @@ -289,11 +289,18 @@ def _load_init(self, data, ctx, cast_dtype=False, dtype_source='current'):
                 elif dtype_source == 'saved':
                     self.dtype = data.dtype
             else:
-                assert np.dtype(self.dtype).type == data.dtype, \
-                "Failed loading Parameter '%s' from saved params: " \
-                "dtype incompatible expected %s vs saved %s. " \
-                "Set cast_dtype=True to cast the dtype of saved params."%(
-                    self.name, str(self.dtype), str(data.dtype))
+                if data.dtype == np.dtype([('bfloat16', np.uint16)]):
+                    assert np.dtype(self.dtype) == data.dtype, \
+                    "Failed loading Parameter '%s' from saved params: " \
+                    "dtype incompatible expected %s vs saved %s. " \
+                    "Set cast_dtype=True to cast the dtype of saved params."%(
+                        self.name, str(self.dtype), str(data.dtype))
+                else:
+                    assert np.dtype(self.dtype).type == data.dtype, \
+                    "Failed loading Parameter '%s' from saved params: " \
+                    "dtype incompatible expected %s vs saved %s. " \
+                    "Set cast_dtype=True to cast the dtype of saved params."%(
+                        self.name, str(self.dtype), str(data.dtype))
 
 Review comment:
    np.dtype(self.dtype) is different from np.dtype(self.dtype).type
   https://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.type.html
   https://docs.scipy.org/doc/numpy/reference/generated/numpy.dtype.html#numpy.dtype
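   
   A short illustration of the difference (the structured bfloat16 dtype is the definition used elsewhere in this PR):
   
   ```python
   import numpy as np
   
   bfloat16 = np.dtype([('bfloat16', np.uint16)])
   
   # For ordinary dtypes, .type is the scalar class (e.g. numpy.float32), which is
   # what the original `np.dtype(self.dtype).type == data.dtype` assert relies on.
   print(np.dtype('float32').type)   # <class 'numpy.float32'>
   
   # For the structured bfloat16 dtype, .type is only the generic numpy.void class,
   # so that form of the check cannot identify it; comparing the dtype objects
   # directly (np.dtype(self.dtype) == data.dtype) does.
   print(bfloat16.type)                                      # <class 'numpy.void'>
   print(bfloat16 == np.dtype([('bfloat16', np.uint16)]))    # True
   ```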
