Posted to commits@tvm.apache.org by "masahi (via GitHub)" <gi...@apache.org> on 2023/07/31 10:36:04 UTC

[GitHub] [tvm] masahi commented on a diff in pull request #15439: [Vulkan] Add spirv shuffle instruction support

masahi commented on code in PR #15439:
URL: https://github.com/apache/tvm/pull/15439#discussion_r1279109603


##########
src/tir/transforms/lower_thread_allreduce.cc:
##########
@@ -719,12 +719,16 @@ class ThreadAllreduceBuilder final : public StmtExprMutator {
   // Also, the warp/wavefront size differs (64 on rocm, 32 on cuda and metal).
   bool IsWarpReduction(const std::vector<DataType>& types, int group_extent, int reduce_extent,
                        int contiguous_reduce_extent) {
-    if ((target_->kind->name != "cuda") && (target_->kind->name != "rocm") &&
-        (target_->kind->name != "metal")) {
+    if (target_->kind->name == "vulkan") {
+      if (target_->GetAttr<Integer>("supported_subgroup_operations") == 0) {

Review Comment:
   Ideally we should check the availability of each subgroup feature against its bit mask, such as `VK_SUBGROUP_FEATURE_SHUFFLE_BIT` and `VK_SUBGROUP_FEATURE_SHUFFLE_RELATIVE_BIT`. But we cannot include a Vulkan header in this file, so for simplicity I'm assuming that a non-zero `supported_subgroup_operations` implies the most common shuffle operations are supported.
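
   For illustration, here is a minimal sketch of what that bit-mask check could look like if the relevant `VkSubgroupFeatureFlagBits` values were mirrored locally (they are fixed by the Vulkan specification), so that no Vulkan header would need to be included in this file. The constant names and the fallback behavior are illustrative assumptions, not the code in this PR:

   ```cpp
   // Hypothetical constants mirroring VkSubgroupFeatureFlagBits so that
   // <vulkan/vulkan_core.h> does not have to be included here.
   constexpr int64_t kSubgroupFeatureShuffleBit = 0x10;          // VK_SUBGROUP_FEATURE_SHUFFLE_BIT
   constexpr int64_t kSubgroupFeatureShuffleRelativeBit = 0x20;  // VK_SUBGROUP_FEATURE_SHUFFLE_RELATIVE_BIT

   if (target_->kind->name == "vulkan") {
     // Read the attribute reported by the Vulkan device layer; treat a missing attribute as 0.
     auto subgroup_ops = target_->GetAttr<Integer>("supported_subgroup_operations");
     int64_t supported = subgroup_ops ? subgroup_ops.value()->value : 0;
     int64_t required = kSubgroupFeatureShuffleBit | kSubgroupFeatureShuffleRelativeBit;
     if ((supported & required) != required) {
       // Shuffle is not available on this device: fall back to the shared-memory reduction path.
       return false;
     }
   }
   ```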


