Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/12/19 06:09:14 UTC

[GitHub] [tvm] ninesheep opened a new pull request, #13641: [Bug][CodeGen,Cuda]fix cast fp16 to int8/uint8 in cuda

ninesheep opened a new pull request, #13641:
URL: https://github.com/apache/tvm/pull/13641

   To reproduce, construct a Relay module like the one below:
   ```python
   import tvm
   from tvm import relay
   
   p0 = relay.var("p0", shape=[32], dtype="float16")
   p1 = relay.var("p1", shape=[32], dtype="float16")
   
   x1 = relay.multiply(p0, p1)
   x2 = relay.round(x1)
   x3 = relay.cast(x2, "uint8")  # or int8
   # Build the function from x3 so the fp16 -> uint8 cast is part of the compiled graph.
   func = relay.Function(relay.analysis.free_vars(x3), x3)
   mod = tvm.IRModule.from_expr(func)
   
   with tvm.transform.PassContext(opt_level=3):
       # relay.build returns a runtime factory module for the CUDA target.
       lib = relay.build(
           mod, target=tvm.target.Target("cuda", host="llvm"), params=None
       )
   ```
   
   Building it reports an error like this:
   ![image](https://user-images.githubusercontent.com/12847365/208359097-e55207c7-7b14-4b2e-bc07-3c6e524cf15d.png)
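   
   As a hedged illustration only (not the exact code this PR changes), the sketch below shows the kind of half-to-uint8 device-side conversion that nvcc accepts: widening to `float` with `__half2float` instead of relying on a direct `half` to `uint8_t` conversion operator, which is presumably where the generated CUDA source fails to compile.
   
   ```cuda
   // Hedged sketch: convert half to uint8_t by widening to float first.
   // A direct (uint8_t)h cast may not compile, depending on how the half
   // type and its conversion operators are defined in the generated source.
   #include <cuda_fp16.h>
   #include <stdint.h>
   
   __global__ void cast_half_to_u8(const half* __restrict__ in,
                                   uint8_t* __restrict__ out, int n) {
     int i = blockIdx.x * blockDim.x + threadIdx.x;
     if (i < n) {
       out[i] = static_cast<uint8_t>(__half2float(in[i]));  // half -> float -> uint8
     }
   }
   ```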
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] tvm-bot commented on pull request #13641: [Bug][CodeGen,Cuda]fix cast fp16 to int8/uint8 in cuda

Posted by GitBox <gi...@apache.org>.
tvm-bot commented on PR #13641:
URL: https://github.com/apache/tvm/pull/13641#issuecomment-1357147143

   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.
   
    * No users to tag found in teams: `bug`, `codegen`, `cuda`. See [#10317](https://github.com/apache/tvm/issues/10317) for details.
   
   Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)




[GitHub] [tvm] masahi merged pull request #13641: [Bug][CodeGen,Cuda]fix cast fp16 to int8/uint8 in cuda

Posted by GitBox <gi...@apache.org>.
masahi merged PR #13641:
URL: https://github.com/apache/tvm/pull/13641




[GitHub] [tvm] ninesheep commented on pull request #13641: [Bug][CodeGen,Cuda]fix cast fp16 to int8/uint8 in cuda

Posted by GitBox <gi...@apache.org>.
ninesheep commented on PR #13641:
URL: https://github.com/apache/tvm/pull/13641#issuecomment-1362475429

   cc @wrongtest-intellif 

