Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/01/12 03:32:58 UTC

[GitHub] [tvm] aaltonenzhang opened a new issue #7258: tvm doesn't support mix-precision inputs for qnn conv2d

aaltonenzhang opened a new issue #7258:
URL: https://github.com/apache/tvm/issues/7258


   I have a tflite model with a float32 feature map and int8 weights. When I looked into convert_conv() in the tflite frontend, I found that _qnn.op.conv2d is constructed only if the feature map is int8 (or uint8) typed. So I tried to modify the code like this:
   
           if input_tensor.qnn_params or weight_tensor.qnn_params:
               qnn_conv2d_params = dict(params)
               qnn_conv2d_params["input_zero_point"] = input_tensor.qnn_params["zero_point"] if input_tensor.qnn_params else relay.const(0, "int32")
               qnn_conv2d_params["kernel_zero_point"] = weight_tensor.qnn_params["zero_point"] if weight_tensor.qnn_params else relay.const(0, "int32")
               qnn_conv2d_params["out_dtype"] = "int32" if (input_tensor.qnn_params and weight_tensor.qnn_params) else "float32"
               qnn_conv2d_params["input_scale"] = input_tensor.qnn_params["scale"] if input_tensor.qnn_params else relay.const(1, "float32")
               qnn_conv2d_params["kernel_scale"] = weight_tensor.qnn_params["scale"] if weight_tensor.qnn_params else relay.const(1, "float32")
               out = _qnn.op.conv2d(in_expr, weight_expr, **qnn_conv2d_params)
           else:
               out = _op.nn.conv2d(in_expr, weight_expr, **params)
   
   Then another check, in QnnConv2DRel(), failed: "Expected qnn conv2d type(int8, uint8) for input but was float32".
   Is it OK to simply modify the code as above, and is it OK to ignore that ICHECK()? I don't know whether there are any other internal constraints on the mixed-precision case, and I hope somebody could support this feature officially.
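   For background, the numerics of the mixed-precision case can also be handled without touching qnn.conv2d at all: dequantize the int8 weights (real_value = scale * (quantized - zero_point), per the affine quantization scheme) and run a plain float32 convolution. A minimal numpy sketch of the dequantize step (function name and values are hypothetical, not the TVM frontend code):

   ```python
   import numpy as np

   def dequantize(q, scale, zero_point):
       # Affine dequantization: real = scale * (quantized - zero_point)
       return scale * (q.astype(np.float32) - zero_point)

   # Hypothetical int8 weights with scale 0.5 and zero point 2
   q_weights = np.array([[4, 6], [0, 2]], dtype=np.int8)
   w_fp32 = dequantize(q_weights, scale=0.5, zero_point=2)
   print(w_fp32)  # [[ 1.  2.]
                  #  [-1.  0.]]
   ```

   The resulting float32 weights can then feed an ordinary _op.nn.conv2d alongside the float32 feature map, sidestepping the dtype check in QnnConv2DRel() at the cost of losing the int8 weight representation inside the conv.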


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] tqchen commented on issue #7258: tvm doesn't support mix-precision inputs for qnn conv2d

Posted by GitBox <gi...@apache.org>.
tqchen commented on issue #7258:
URL: https://github.com/apache/tvm/issues/7258#issuecomment-758676410


   This seems to be a discussion about an enhancement; it would be great to open a thread on https://discuss.tvm.apache.org/





[GitHub] [tvm] aaltonenzhang commented on issue #7258: tvm doesn't support mix-precision inputs for qnn conv2d

Posted by GitBox <gi...@apache.org>.
aaltonenzhang commented on issue #7258:
URL: https://github.com/apache/tvm/issues/7258#issuecomment-759146686


   My account on the discussion forum is on hold; I won't be able to reply or create topics until a staff member reviews the status. Could you please help? Thanks.





[GitHub] [tvm] tqchen closed issue #7258: tvm doesn't support mix-precision inputs for qnn conv2d

Posted by GitBox <gi...@apache.org>.
tqchen closed issue #7258:
URL: https://github.com/apache/tvm/issues/7258


   

