Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/12/03 02:43:16 UTC

[GitHub] [tvm] merrymercy opened a new pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

merrymercy opened a new pull request #7020:
URL: https://github.com/apache/tvm/pull/7020


   - Expose all fields of the hardware parameters to the constructor
   - Make `LogEstimatedLatency` the default callback
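
   The first bullet, exposing every field through the constructor, can be sketched with a plain dataclass whose fields mirror those visible in the review diffs below. This is an illustrative stand-in, not TVM's actual `HardwareParams` FFI object; the exact field list and ordering are assumptions.

   ```python
   # Hedged sketch of "expose all fields to the constructor".
   # The real HardwareParams is a C++ node behind TVM's FFI; the
   # fields here are taken from the diff plus assumed CPU fields.
   from dataclasses import dataclass


   @dataclass
   class HardwareParams:
       # CPU-related parameters
       num_cores: int
       vector_unit_bytes: int
       cache_line_bytes: int
       # GPU-related parameters from the device query API
       max_shared_memory_per_block: int
       max_registers_per_block: int
       max_threads_per_block: int
   ```

   A hypothetical GPU target could then be described directly, e.g. `HardwareParams(4, 64, 64, 49152, 65536, 1024)`, with no hidden defaults.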
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org




[GitHub] [tvm] FrozenGene commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r534719806



##########
File path: include/tvm/auto_scheduler/search_task.h
##########
@@ -44,17 +44,16 @@ class HardwareParamsNode : public Object {
   int cache_line_bytes;
 
   // GPU related parameters got from device query API
-
-  /*! \brief The max shared memory per block. */
-  int max_shared_memory_per_block{INT32_MAX};
-  /*! \brief The max register memory per block. */
-  int max_registers_per_block{INT32_MAX};
-  /*! \brief The max threads per block. */
-  int max_threads_per_block{INT32_MAX};
+  /*! \brief The max shared memory per block in bytes. */
+  int max_shared_memory_per_block;
+  /*! \brief The max number of register per block. */
+  int max_registers_per_block;
+  /*! \brief The max number of threads per block. */
+  int max_threads_per_block;

Review comment:
       How about exposing them in `tvm/_ffi/runtime_ctypes.py`, like
   ```python
   @property
   def max_thread_dimensions(self):
   ```
   
   This would make it convenient to get these values when building our `HardwareParams` (e.g., for the Mali target).
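
   The property-wrapper pattern being suggested, exposing device-query attributes as Python properties, can be sketched as follows. `Device`, its attribute dictionary, and the attribute ids are hypothetical stand-ins for TVM's `runtime_ctypes` code, not its actual API.

   ```python
   # Hedged sketch: device attributes fetched through a
   # _GetDeviceAttr-style query, exposed as Python properties.
   class Device:
       def __init__(self, device_type, device_id, attrs):
           self.device_type = device_type
           self.device_id = device_id
           self._attrs = attrs  # stand-in for the device query API

       def _GetDeviceAttr(self, device_type, device_id, attr_id):
           # In TVM this crosses the FFI into the C++ runtime;
           # here we simply read from a dict keyed by attribute id.
           return self._attrs.get(attr_id)

       @property
       def max_thread_dimensions(self):
           """Return the max thread dimensions (attr id assumed)."""
           return self._GetDeviceAttr(self.device_type, self.device_id, 8)

       @property
       def max_registers_per_block(self):
           """Return max registers per block (attr id assumed)."""
           return self._GetDeviceAttr(self.device_type, self.device_id, 9)
   ```

   Callers would then write `dev.max_registers_per_block` instead of threading the raw attribute id through every call site.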







[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r534889899



##########
File path: include/tvm/auto_scheduler/search_task.h
##########
@@ -44,17 +44,16 @@ class HardwareParamsNode : public Object {
   int cache_line_bytes;
 
   // GPU related parameters got from device query API
-
-  /*! \brief The max shared memory per block. */
-  int max_shared_memory_per_block{INT32_MAX};
-  /*! \brief The max register memory per block. */
-  int max_registers_per_block{INT32_MAX};
-  /*! \brief The max threads per block. */
-  int max_threads_per_block{INT32_MAX};
+  /*! \brief The max shared memory per block in bytes. */
+  int max_shared_memory_per_block;
+  /*! \brief The max number of register per block. */
+  int max_registers_per_block;
+  /*! \brief The max number of threads per block. */
+  int max_threads_per_block;

Review comment:
       Please take another look and approve. Thanks!







[GitHub] [tvm] merrymercy commented on pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
merrymercy commented on pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#issuecomment-737630159


   cc @jcf94 





[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r535492994



##########
File path: python/tvm/auto_scheduler/relay_integration.py
##########
@@ -342,3 +343,14 @@ def rewrite_compute_body(compute_tensor, new_layout):
     num = op_node.num_outputs
     outputs = tuple(op_node.output(i) for i in range(num))
     return outputs[0] if num == 1 else outputs
+
+
+def is_auto_scheduler_enabled():
+    """Return whether the auto-scheduler is enabled

Review comment:
       ```suggestion
       """Return whether the auto-scheduler is enabled.
   ```
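
   For context, a query like `is_auto_scheduler_enabled()` can be sketched as a flag tied to the current compilation context. The real TVM function consults the active compilation configuration; the thread-local flag and `enable_auto_scheduler` helper below are only illustrative stand-ins.

   ```python
   # Hedged sketch of an "is enabled" query backed by a
   # per-thread flag (TVM's real check inspects the active
   # compilation context, not a module-level flag like this).
   import threading

   _state = threading.local()


   def enable_auto_scheduler(enabled=True):
       """Toggle the (illustrative) auto-scheduler flag for this thread."""
       _state.enabled = enabled


   def is_auto_scheduler_enabled():
       """Return whether the auto-scheduler is enabled."""
       return getattr(_state, "enabled", False)
   ```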








[GitHub] [tvm] comaniac commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r535429580



##########
File path: python/tvm/relay/op/strategy/x86.py
##########
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
       `with autotvm`?

##########
File path: python/tvm/relay/op/strategy/x86.py
##########
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")
             strategy.add_implementation(
                 wrap_compute_conv2d(topi.nn.group_conv2d_nchw, has_groups=True),
                 wrap_topi_schedule(topi.generic.schedule_group_conv2d_nchw),
                 name="group_conv2d_nchw.generic",
             )
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
       `with autotvm`?







[GitHub] [tvm] merrymercy merged pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
merrymercy merged pull request #7020:
URL: https://github.com/apache/tvm/pull/7020


   





[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r535444556



##########
File path: python/tvm/relay/op/strategy/x86.py
##########
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")
             strategy.add_implementation(
                 wrap_compute_conv2d(topi.nn.group_conv2d_nchw, has_groups=True),
                 wrap_topi_schedule(topi.generic.schedule_group_conv2d_nchw),
                 name="group_conv2d_nchw.generic",
             )
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
       ```suggestion
                   logger.warning("group_conv2d is not optimized for x86 with autotvm.")
   ```







[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r534760628



##########
File path: include/tvm/auto_scheduler/search_task.h
##########
@@ -44,17 +44,16 @@ class HardwareParamsNode : public Object {
   int cache_line_bytes;
 
   // GPU related parameters got from device query API
-
-  /*! \brief The max shared memory per block. */
-  int max_shared_memory_per_block{INT32_MAX};
-  /*! \brief The max register memory per block. */
-  int max_registers_per_block{INT32_MAX};
-  /*! \brief The max threads per block. */
-  int max_threads_per_block{INT32_MAX};
+  /*! \brief The max shared memory per block in bytes. */
+  int max_shared_memory_per_block;
+  /*! \brief The max number of register per block. */
+  int max_registers_per_block;
+  /*! \brief The max number of threads per block. */
+  int max_threads_per_block;

Review comment:
       This is irrelevant to this PR.
   We can do it in other PRs.







[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r535444853



##########
File path: python/tvm/relay/op/strategy/x86.py
##########
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
       ```suggestion
                   logger.warning("group_conv2d is not optimized for x86 with autotvm.")
   ```

##########
File path: python/tvm/relay/op/strategy/x86.py
##########
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")
             strategy.add_implementation(
                 wrap_compute_conv2d(topi.nn.group_conv2d_nchw, has_groups=True),
                 wrap_topi_schedule(topi.generic.schedule_group_conv2d_nchw),
                 name="group_conv2d_nchw.generic",
             )
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
       ```suggestion
                   logger.warning("group_conv2d is not optimized for x86 with autotvm.")
   ```

##########
File path: python/tvm/relay/op/strategy/x86.py
##########
@@ -117,14 +118,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
             return conv2d_NCHWc_strategy_cpu(attrs, inputs, out_type, target)
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
+            if not is_auto_scheduler_enabled():
+                logger.warning("conv2d NHWC layout is not optimized for x86 in autotvm.")

Review comment:
       ```suggestion
                   logger.warning("conv2d NHWC layout is not optimized for x86 with autotvm.")
   ```

##########
File path: python/tvm/relay/op/strategy/x86.py
##########
@@ -117,14 +118,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
             return conv2d_NCHWc_strategy_cpu(attrs, inputs, out_type, target)
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
+            if not is_auto_scheduler_enabled():
+                logger.warning("conv2d NHWC layout is not optimized for x86 in autotvm.")
             strategy.add_implementation(
                 wrap_compute_conv2d(topi.nn.conv2d_nhwc, need_auto_scheduler_layout=True),
                 wrap_topi_schedule(topi.x86.schedule_conv2d_nhwc),
                 name="conv2d_nhwc.x86",
             )
         elif layout == "HWCN":
             assert kernel_layout == "HWIO"
-            logger.warning("conv2d HWCN layout is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("conv2d HWCN layout is not optimized for x86 in autotvm.")

Review comment:
       ```suggestion
                   logger.warning("conv2d HWCN layout is not optimized for x86 with autotvm.")
   ```
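
   The gating pattern under discussion, warning about an unoptimized autotvm fallback only when the auto-scheduler is not in use, can be sketched as follows. `is_auto_scheduler_enabled` is stubbed here (in TVM it comes from the relay integration module), and the strategy-selection helper is a hypothetical simplification of the real `conv2d_strategy_cpu`.

   ```python
   # Hedged sketch of the warning-gating pattern from the diff:
   # emit the "not optimized" warning only on the autotvm path.
   import logging

   logger = logging.getLogger("strategy")


   def is_auto_scheduler_enabled():
       # Stub; the real check inspects the compilation context.
       return False


   def group_conv2d_strategy_name(layout, kernel_layout):
       """Return a strategy name for group_conv2d (illustrative)."""
       if layout == "NCHW":
           assert kernel_layout == "OIHW"
           if not is_auto_scheduler_enabled():
               logger.warning(
                   "group_conv2d is not optimized for x86 with autotvm."
               )
           return "group_conv2d_nchw.generic"
       raise ValueError("Unsupported layout: %s" % layout)
   ```

   The design point is that the generic implementation is still registered either way; only the noisy warning is suppressed when the auto-scheduler will generate the schedule.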








[GitHub] [tvm] FrozenGene commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r534731618



##########
File path: include/tvm/auto_scheduler/search_task.h
##########
@@ -44,17 +44,16 @@ class HardwareParamsNode : public Object {
   int cache_line_bytes;
 
   // GPU related parameters got from device query API
-
-  /*! \brief The max shared memory per block. */
-  int max_shared_memory_per_block{INT32_MAX};
-  /*! \brief The max register memory per block. */
-  int max_registers_per_block{INT32_MAX};
-  /*! \brief The max threads per block. */
-  int max_threads_per_block{INT32_MAX};
+  /*! \brief The max shared memory per block in bytes. */
+  int max_shared_memory_per_block;
+  /*! \brief The max number of register per block. */
+  int max_registers_per_block;
+  /*! \brief The max number of threads per block. */
+  int max_threads_per_block;

Review comment:
       What I meant is that we add a wrapper for the attribute, like this:
   ```python
   @property
   def max_registers_per_block(self):
       """Return max registers per block"""
       return self._GetDeviceAttr(self.device_type, self.device_id, 9)
   ```
   Otherwise, accessing these properties will not be as convenient as the existing `max_shared_memory_per_block` property, which was supported before.







[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r534721692



##########
File path: include/tvm/auto_scheduler/search_task.h
##########
@@ -44,17 +44,16 @@ class HardwareParamsNode : public Object {
   int cache_line_bytes;
 
   // GPU related parameters got from device query API
-
-  /*! \brief The max shared memory per block. */
-  int max_shared_memory_per_block{INT32_MAX};
-  /*! \brief The max register memory per block. */
-  int max_registers_per_block{INT32_MAX};
-  /*! \brief The max threads per block. */
-  int max_threads_per_block{INT32_MAX};
+  /*! \brief The max shared memory per block in bytes. */
+  int max_shared_memory_per_block;
+  /*! \brief The max number of register per block. */
+  int max_registers_per_block;
+  /*! \brief The max number of threads per block. */
+  int max_threads_per_block;

Review comment:
       This is already supported by `VisitAttrs`.







[GitHub] [tvm] FrozenGene commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

Posted by GitBox <gi...@apache.org>.
FrozenGene commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r534805176



##########
File path: include/tvm/auto_scheduler/search_task.h
##########
@@ -44,17 +44,16 @@ class HardwareParamsNode : public Object {
   int cache_line_bytes;
 
   // GPU related parameters got from device query API
-
-  /*! \brief The max shared memory per block. */
-  int max_shared_memory_per_block{INT32_MAX};
-  /*! \brief The max register memory per block. */
-  int max_registers_per_block{INT32_MAX};
-  /*! \brief The max threads per block. */
-  int max_threads_per_block{INT32_MAX};
+  /*! \brief The max shared memory per block in bytes. */
+  int max_shared_memory_per_block;
+  /*! \brief The max number of register per block. */
+  int max_registers_per_block;
+  /*! \brief The max number of threads per block. */
+  int max_threads_per_block;

Review comment:
       It is OK with me.



