Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/12/05 11:41:58 UTC

[GitHub] [tvm] ninesheep opened a new pull request, #13551: [TOPI][CUDA][Fix Bug]fix the bug of schedule batch_matmul_int8 on cuda

ninesheep opened a new pull request, #13551:
URL: https://github.com/apache/tvm/pull/13551

   To reproduce, construct a graph like the one below:
   ``` python
       import tvm
       from tvm import relay

       p0 = relay.var("p0", shape=[384, 144, 32], dtype="int8")
       p1 = relay.var("p1", shape=[384, 144, 32], dtype="int8")
       x1 = relay.nn.batch_matmul(p0, p1, transpose_b=True, out_dtype="int32")
       x2 = relay.cast(x1, "int64")
       func = relay.Function(relay.analysis.free_vars(x2), x2)
       mod = tvm.IRModule.from_expr(func)

       with tvm.transform.PassContext(opt_level=3):
           graph, lib, params = tvm.relay.build_module.build(
               mod, target="cuda --host=llvm", params=None
           )
   ```
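   For readers unfamiliar with the operator: `nn.batch_matmul` with `transpose_b=True` computes `out[b, i, j] = sum_k A[b, i, k] * B[b, j, k]`, so both operands share the reduction axis as their last dimension (as in the [384, 144, 32] shapes above). A minimal pure-Python sketch of that semantics, independent of TVM (the helper name is made up for illustration):

```python
# Sketch of batch_matmul with transpose_b=True:
#   out[n, i, j] = sum over k of a[n][i][k] * b[n][j][k]
# Shapes: a is [B, I, K], b is [B, J, K], out is [B, I, J].
def batch_matmul_transpose_b(a, b):
    batch = len(a)
    rows = len(a[0])
    cols = len(b[0])
    red = len(a[0][0])
    return [
        [
            [sum(a[n][i][k] * b[n][j][k] for k in range(red)) for j in range(cols)]
            for i in range(rows)
        ]
        for n in range(batch)
    ]

# Tiny example: batch of 1, a 2x3 operand against a 2x3 operand -> 2x2 output.
a = [[[1, 2, 3], [4, 5, 6]]]
b = [[[1, 0, 0], [0, 1, 0]]]
out = batch_matmul_transpose_b(a, b)
# out[0] == [[1, 2], [4, 5]]
```

   In the failing graph this int32 matmul output is then cast (to int64), and the CUDA int8 schedule must locate the matmul tensor inside that fused cast stage.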
    Building it then fails with an error like this:
    ```
    Check failed: (!repl_op.same_as(s->op)) is false: Cannot find Tensor(shape=[384, 144, 32], op.name=compute) in the inputs of compute(T_cast, body=[int8(compute[ax0, ax1, ax2])], axis=[iter_var(ax0, range(min=0, ext=384)), iter_var(ax1, range(min=0, ext=144)), iter_var(ax2, range(min=0, ext=144))], reduce_axis=[], tag=elemwise, attrs={})
    ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [tvm] tvm-bot commented on pull request #13551: [TOPI][CUDA][Fix Bug]fix the bug of schedule batch_matmul_int8 on cuda

Posted by GitBox <gi...@apache.org>.
tvm-bot commented on PR #13551:
URL: https://github.com/apache/tvm/pull/13551#issuecomment-1337190312

   
   Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @-ing them in a comment.
   
    * No users to tag found in teams: `topi`, `cuda`, `fix bug`. See [#10317](https://github.com/apache/tvm/issues/10317) for details.
   
   Generated by [tvm-bot](https://github.com/apache/tvm/blob/main/ci/README.md#github-actions)




[GitHub] [tvm] masahi merged pull request #13551: [Fix Bug]fix the bug of schedule batch_matmul_int8 on cuda

Posted by GitBox <gi...@apache.org>.
masahi merged PR #13551:
URL: https://github.com/apache/tvm/pull/13551

