Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/03/10 19:30:43 UTC

[GitHub] [incubator-tvm] anijain2305 opened a new pull request #5031: [CUDA] Op strategy changes for Int8 schedules.

anijain2305 opened a new pull request #5031: [CUDA] Op strategy changes for Int8 schedules.
URL: https://github.com/apache/incubator-tvm/pull/5031

[GitHub] [incubator-tvm] icemelon9 commented on issue #5031: [Strategy] Support for Int8 schedules - CUDA/x86

Posted by GitBox <gi...@apache.org>.
icemelon9 commented on issue #5031: [Strategy] Support for Int8 schedules - CUDA/x86
URL: https://github.com/apache/incubator-tvm/pull/5031#issuecomment-598295601
   Thanks @anijain2305 @kevinthesun @vinx13. This is now merged.

[GitHub] [incubator-tvm] vinx13 commented on issue #5031: [Strategy] Support for Int8 schedules - CUDA/x86

Posted by GitBox <gi...@apache.org>.
vinx13 commented on issue #5031: [Strategy] Support for Int8 schedules - CUDA/x86
URL: https://github.com/apache/incubator-tvm/pull/5031#issuecomment-597992896
   I think padding channels would be helpful; it would be good to have a comparison result (channel padding + int8 template vs. the direct template).

[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #5031: [Strategy] Support for Int8 schedules - CUDA/x86

Posted by GitBox <gi...@apache.org>.
icemelon9 commented on a change in pull request #5031: [Strategy] Support for Int8 schedules - CUDA/x86
URL: https://github.com/apache/incubator-tvm/pull/5031#discussion_r391211016

 ##########
 File path: topi/python/topi/cuda/conv2d_int8.py
 ##########
 @@ -23,10 +24,22 @@
 from .injective import schedule_injective_from_existing
 from .tensor_intrin import dp4a
 from ..nn.pad import pad
+from ..nn.conv2d import unpack_NCHWc_to_nchw
 from ..nn.util import get_pad_tuple
 from ..util import get_const_tuple, traverse_inline
 
 
+def conv2d_nchw_int8(data, kernel, strides, padding, dilation, out_dtype='int32'):
 
 Review comment:
   add a function doc 
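
A docstring along these lines would satisfy the request. This is only a sketch, written in the NumPy style used elsewhere in topi; the type names and shape descriptions are inferred from the signature shown in the diff, not taken from the merged code.

def conv2d_nchw_int8(data, kernel, strides, padding, dilation, out_dtype='int32'):
    """Compute conv2d on int8 data and kernel in NCHW layout.

    Parameters
    ----------
    data : tvm.te.Tensor
        4-D with shape [batch, in_channel, in_height, in_width]
    kernel : tvm.te.Tensor
        4-D with shape [num_filter, in_channel, filter_height, filter_width]
    strides : int or a list/tuple of two ints
        Stride size, or [stride_height, stride_width]
    padding : int or a list/tuple of two or four ints
        Padding size
    dilation : int or a list/tuple of two ints
        Dilation size, or [dilation_height, dilation_width]
    out_dtype : str
        Output data type; int8 convolution accumulates into "int32" by default.

    Returns
    -------
    output : tvm.te.Tensor
        4-D with shape [batch, out_channel, out_height, out_width]
    """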

[GitHub] [incubator-tvm] anijain2305 commented on issue #5031: [Strategy] Support for Int8 schedules - CUDA/x86

Posted by GitBox <gi...@apache.org>.
anijain2305 commented on issue #5031: [Strategy] Support for Int8 schedules - CUDA/x86
URL: https://github.com/apache/incubator-tvm/pull/5031#issuecomment-597761418
   @vinx13 Adding you as well, because I have padded the C dim for GPU using Legalize so that the DP4A schedules can be used. Otherwise, we would have to put a check in strategy.
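
For context, DP4A consumes int8 values in groups of four along the reduction axis, so the Legalize step mentioned above zero-pads the input-channel (C) dimension up to a multiple of four before the int8 schedule is chosen. Below is a minimal NumPy sketch of that shape transformation, for illustration only; the actual pass is written with Relay ops, and the function name here is made up.

import numpy as np

def pad_channels_for_dp4a(data, ic_block=4):
    # Zero-pad the C axis of an NCHW int8 tensor up to a multiple of ic_block
    # so the reduction splits into whole groups of 4 for DP4A.
    _, c, _, _ = data.shape
    pad_c = (ic_block - c % ic_block) % ic_block
    if pad_c == 0:
        return data
    return np.pad(data, ((0, 0), (0, pad_c), (0, 0), (0, 0)), mode="constant")

x = np.random.randint(-128, 128, size=(1, 3, 56, 56), dtype=np.int8)
print(pad_channels_for_dp4a(x).shape)   # (1, 4, 56, 56)

The kernel's input-channel axis needs the same zero padding so the contraction dimensions still match; padding with zeros leaves the convolution result unchanged.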

[GitHub] [incubator-tvm] icemelon9 merged pull request #5031: [Strategy] Support for Int8 schedules - CUDA/x86

Posted by GitBox <gi...@apache.org>.
icemelon9 merged pull request #5031: [Strategy] Support for Int8 schedules - CUDA/x86
URL: https://github.com/apache/incubator-tvm/pull/5031

[GitHub] [incubator-tvm] icemelon9 commented on issue #5031: [Strategy] Support for Int8 schedules - CUDA/x86

Posted by GitBox <gi...@apache.org>.
icemelon9 commented on issue #5031: [Strategy] Support for Int8 schedules - CUDA/x86
URL: https://github.com/apache/incubator-tvm/pull/5031#issuecomment-597951311
   Could you add a few tests for conv2d_nchw_int8 in topi/tests/python/test_topi_conv2d_int8.py?

   Otherwise, LGTM.
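
Such a test typically compares the compiled topi schedule against a plain NumPy reference that accumulates in int32. Below is a minimal sketch of such a reference (stride 1, no padding, no dilation, to keep it short); it is illustrative only and not the contents of test_topi_conv2d_int8.py.

import numpy as np

def conv2d_nchw_int8_ref(data, kernel):
    # Naive NCHW convolution: int8 inputs, int32 accumulation,
    # stride 1, no padding, no dilation.
    n, ic, ih, iw = data.shape
    oc, _, kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((n, oc, oh, ow), dtype=np.int32)
    for y in range(oh):
        for x in range(ow):
            patch = data[:, :, y:y + kh, x:x + kw].astype(np.int32)
            out[:, :, y, x] = np.einsum("nihw,oihw->no", patch,
                                        kernel.astype(np.int32))
    return out

a = np.random.randint(-128, 128, size=(1, 16, 8, 8), dtype=np.int8)
w = np.random.randint(-128, 128, size=(32, 16, 3, 3), dtype=np.int8)
print(conv2d_nchw_int8_ref(a, w).shape)   # (1, 32, 6, 6)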
