Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/04/07 14:07:47 UTC

[GitHub] [tvm] ekalda commented on a diff in pull request #10915: [TFLite] Add support to int16 data type in TFLite frontend

ekalda commented on code in PR #10915:
URL: https://github.com/apache/tvm/pull/10915#discussion_r845174881


##########
src/relay/qnn/op/convolution.cc:
##########
@@ -50,12 +50,15 @@ bool QnnConv2DRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   if (data == nullptr || weight == nullptr) return false;
   const auto* param = attrs.as<Conv2DAttrs>();
   ICHECK(param != nullptr) << "Conv2DAttrs cannot be nullptr.";
-  ICHECK(data->dtype == DataType::Int(8) || data->dtype == DataType::UInt(8))
-      << "Expected qnn conv2d type(int8, uint8) for input but was " << data->dtype;
-  ICHECK(weight->dtype == DataType::Int(8) || weight->dtype == DataType::UInt(8))
-      << "Expected qnn conv2d type(int8, uint8) for weight but was " << weight->dtype;
-  ICHECK(param->out_dtype == DataType::Int(16) || param->out_dtype == DataType::Int(32))
-      << "Expected qnn conv2d type(int32, int16) for output but was " << param->out_dtype;
+  ICHECK(data->dtype == DataType::Int(8) || data->dtype == DataType::UInt(8) ||
+         data->dtype == DataType::Int(16))
+      << "Expected qnn conv2d type(int8, uint8, int16) for input but was " << data->dtype;

Review Comment:
   There are several places in the qnn conv2d canonicalisation where the input data or weights are cast into a wider type (e.g. https://github.com/apache/tvm/blob/main/src/relay/qnn/op/convolution.cc#L199-L201), presumably to avoid overflow/underflow. Since the code has always assumed the original data is (u)int8, don't we need to think through the numeric limits and adjust them accordingly for the int16 case? I see that most of the casts are into int32, though, and since the tests are passing, maybe 32 bits is wide enough for int16 as well...
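   To make the overflow concern concrete, here is a rough worst-case back-of-the-envelope sketch (not TVM's actual logic; `accum_bits` and the example 3x3/64-channel shape are illustrative assumptions) of how many bits the conv2d accumulator needs for int8 vs int16 inputs:

   ```python
   # Worst-case accumulator-width estimate for a quantized conv2d.
   # Hypothetical helper for illustration only, not part of TVM.

   def accum_bits(input_bits: int, num_terms: int) -> int:
       """Bits needed to hold the worst-case sum of num_terms
       products of two signed input_bits-wide values."""
       max_val = 2 ** (input_bits - 1)      # e.g. 128 for int8
       max_product = max_val * max_val      # worst-case |a * b|
       max_sum = max_product * num_terms
       return max_sum.bit_length() + 1      # +1 for the sign bit

   # Example: a 3x3 kernel over 64 input channels accumulates
   # 3 * 3 * 64 = 576 products per output element.
   print(accum_bits(8, 576))    # int8 inputs: fits in int32
   print(accum_bits(16, 576))   # int16 inputs: exceeds int32
   ```

   Under these assumptions the int8 worst case stays well inside int32, while the int16 worst case does not, so in practice int16 kernels tend to rely on int64 accumulation or on the realistic (narrower-than-worst-case) range of quantized values.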



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org