Posted to commits@mxnet.apache.org by GitBox <gi...@apache.org> on 2019/03/05 00:17:59 UTC

[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #14094: Enhance gpu quantization

anirudh2290 commented on a change in pull request #14094: Enhance gpu quantization
URL: https://github.com/apache/incubator-mxnet/pull/14094#discussion_r262299984
 
 

 ##########
 File path: src/operator/quantization/quantized_conv.cu
 ##########
 @@ -110,6 +110,9 @@ class QuantizedCuDNNConvOp {
     const TShape& fshape = filter.shape_;
     const TShape& oshape = out.shape_;
 
+    CHECK_EQ(data.type_flag_, mshadow::kInt8)
+      << "currently, uint8 quantization is only supported by CPU, "
+         "please switch to the context of CPU or int8 data type for GPU.";
 
 Review comment:
   Can we add this check inside quantize-inl.h instead? That way it will return an error message even for networks that don't contain this op.
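
   A minimal, self-contained sketch of the suggested early check (see below). It is not the real quantize-inl.h code: the enum values, the helper name `CheckQuantizeDtype`, and the use of an exception are placeholders; in MXNet the same idea would be expressed with `CHECK_EQ` from dmlc/logging.h inside the quantize op's GPU compute or type-inference path.

   ```cpp
   // Sketch only: placeholder types and helper, not MXNet's actual API.
   #include <iostream>
   #include <stdexcept>

   enum TypeFlag { kUint8, kInt8 };   // placeholder values, not mshadow's flags
   enum DevType  { kCPU, kGPU };

   // Hypothetical helper: reject uint8 quantization when running on GPU,
   // regardless of which quantized operators the network contains.
   void CheckQuantizeDtype(DevType dev, TypeFlag out_type) {
     if (dev == kGPU && out_type != kInt8) {
       throw std::runtime_error(
           "currently, uint8 quantization is only supported by CPU, "
           "please switch to the context of CPU or int8 data type for GPU.");
     }
   }

   int main() {
     CheckQuantizeDtype(kCPU, kUint8);    // fine: uint8 on CPU is supported
     try {
       CheckQuantizeDtype(kGPU, kUint8);  // triggers the error message
     } catch (const std::exception& e) {
       std::cerr << e.what() << std::endl;
     }
     return 0;
   }
   ```

   Placing the check in quantize-inl.h means the error surfaces as soon as any tensor is quantized to uint8 on a GPU context, rather than only when a quantized_conv node happens to be present in the graph.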

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services