Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/02/25 03:13:44 UTC

[GitHub] [tvm] hzwangjl opened a new issue #7529: DLTensor from float32 to float16 in cpp deploy

hzwangjl opened a new issue #7529:
URL: https://github.com/apache/tvm/issues/7529


   I have built the model to fp16 with relay.build_module.build, but during C++ deployment I also need to convert the input image data to fp16. I tried the following method, but it didn't work. Is there an efficient way to do this in TVM? Thank you~
   ```
   #include <dlpack/dlpack.h>
   #include <tvm/runtime/c_runtime_api.h>

   DLTensor* fp32_tensor;
   DLTensor* fp16_tensor;
   TVMArrayAlloc(shape, ndim, kDLFloat, 32, lanes, kDLCPU, 0, &fp32_tensor);
   TVMArrayAlloc(shape, ndim, kDLFloat, 16, lanes, kDLGPU, 0, &fp16_tensor);
   memcpy(fp32_tensor->data, data_cpu, size);
   // fails: TVMArrayCopyFromTo copies raw bytes and does not cast fp32 to fp16
   TVMArrayCopyFromTo(fp32_tensor, fp16_tensor, nullptr);
   ```





[GitHub] [tvm] jinfagang commented on issue #7529: DLTensor from float32 to float16 in cpp deploy

jinfagang commented on issue #7529:
URL: https://github.com/apache/tvm/issues/7529#issuecomment-1080127813


   @jellywong88 Were you able to do it?





[GitHub] [tvm] tqchen commented on issue #7529: DLTensor from float32 to float16 in cpp deploy

tqchen commented on issue #7529:
URL: https://github.com/apache/tvm/issues/7529#issuecomment-789798630


   @hzwangjl An explicit conversion method would be necessary. Normally there are intrinsics on the device that can do this. The community uses https://discuss.tvm.apache.org/ for this type of question; let us follow up there.
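
   A host-side workaround is to cast the fp32 buffer to fp16 on the CPU and only then copy the already-fp16 data to the device tensor, so that TVMArrayCopyFromTo moves matching dtypes. Below is a minimal sketch of that idea, assuming the shape/ndim/data_cpu variables from the snippet above. FloatToHalf is a simplified illustrative converter (truncating rounding, subnormals flushed to zero, NaN saturated to infinity), not a TVM API, and kDLGPU follows the original snippet (newer dlpack/TVM versions call it kDLCUDA).
   ```
   #include <dlpack/dlpack.h>
   #include <tvm/runtime/c_runtime_api.h>
   #include <cstdint>
   #include <cstring>
   #include <vector>

   // Simplified fp32 -> fp16 conversion: truncating rounding, subnormals
   // flushed to zero, overflow (and NaN) saturated to infinity.
   static uint16_t FloatToHalf(float f) {
     uint32_t x;
     std::memcpy(&x, &f, sizeof(x));
     uint32_t sign = (x >> 16) & 0x8000u;
     int32_t exp = static_cast<int32_t>((x >> 23) & 0xFFu) - 127 + 15;
     uint32_t mant = (x >> 13) & 0x3FFu;
     if (exp <= 0) return static_cast<uint16_t>(sign);             // too small -> signed zero
     if (exp >= 31) return static_cast<uint16_t>(sign | 0x7C00u);  // too large -> inf
     return static_cast<uint16_t>(sign | (static_cast<uint32_t>(exp) << 10) | mant);
   }

   // Convert the input on the host, then copy the fp16 buffer to the device.
   void CopyFp32InputAsFp16(const float* data_cpu, int64_t num_elems,
                            const int64_t* shape, int ndim) {
     std::vector<uint16_t> half_buf(num_elems);
     for (int64_t i = 0; i < num_elems; ++i) half_buf[i] = FloatToHalf(data_cpu[i]);

     DLTensor* fp16_cpu = nullptr;
     DLTensor* fp16_gpu = nullptr;
     TVMArrayAlloc(shape, ndim, kDLFloat, 16, 1, kDLCPU, 0, &fp16_cpu);
     TVMArrayAlloc(shape, ndim, kDLFloat, 16, 1, kDLGPU, 0, &fp16_gpu);

     std::memcpy(fp16_cpu->data, half_buf.data(), num_elems * sizeof(uint16_t));
     // Both tensors are fp16 now, so this is a plain byte copy to the device.
     TVMArrayCopyFromTo(fp16_cpu, fp16_gpu, nullptr);

     // ... set fp16_gpu as the module input and run inference ...

     TVMArrayFree(fp16_cpu);
     TVMArrayFree(fp16_gpu);
   }
   ```
   For production use, a properly rounding fp16 conversion (or a dedicated half-precision library) or the device-side intrinsics mentioned above would be preferable to this sketch.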





[GitHub] [tvm] tqchen closed issue #7529: DLTensor from float32 to float16 in cpp deploy

tqchen closed issue #7529:
URL: https://github.com/apache/tvm/issues/7529


   

