Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2021/01/27 02:00:08 UTC

[GitHub] [tvm] comaniac opened a new pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

comaniac opened a new pull request #7344:
URL: https://github.com/apache/tvm/pull/7344


   This is the follow-up PR to #7317 that enables schedule sharing in the auto_scheduler dispatch context.
   
   cc @merrymercy @jcf94 


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [tvm] jcf94 commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

Posted by GitBox <gi...@apache.org>.
jcf94 commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767986306


   > > @comaniac Does this mean tuning support for dynamic workloads (dynamic batch sizes, etc.) is coming soon? I'm very excited for this; it would tremendously help my MaskRCNN!!
   > 
   > Ah, this is not a perfect solution for dynamic shape; it is more a way to make tuned logs more useful. For example, you can apply a tuning log collected with batch size 1 to all batch sizes. You can even tune several prime batch sizes to get better performance for their multiples. Meanwhile, we are working on dynamic shape support in auto_scheduler, but it may not be ready to be upstreamed before this summer or fall.
   
   😃 Looking forward to the dynamic shape support, too! It will be very useful.





[GitHub] [tvm] masahi commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767987598


   Yes, I believe dynamic tuning and codegen are among the biggest challenges for TVM this year. I'm glad at least there are folks looking at the problem. MaskRCNN should serve as a good benchmark; it has both dynamic dense (very large) and dynamic conv2d + conv2d transpose.





[GitHub] [tvm] comaniac commented on a change in pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#discussion_r565517145



##########
File path: python/tvm/auto_scheduler/dispatcher.py
##########
@@ -126,18 +127,53 @@ class ApplyHistoryBest(DispatchContext):
         If is str, then it should be the filename of a records log file.
         Each row of this file is an encoded record pair. Otherwise, it is an iterator.
     n_lines: Optional[int]
-        if it is not None, only load the first `n_lines` lines of log
+        if it is not None, only load the first `n_lines` lines of log.
+    include_compatible: bool
+        When set to True, compatible records will also be considered.
     """
 
-    def __init__(self, records, n_lines=None):
+    def __init__(self, records, n_lines=None, include_compatible=False):
         super(ApplyHistoryBest, self).__init__()
+        self.include_compatible = include_compatible
 
+        # Dict[str (target key),
+        #   Dict[str (workload hash),
+        #     Dict[tuple (workload args), tuple (State, cost)]]]
         self.best_by_targetkey = {}
         self.best_by_model = {}
         self._best_user_defined = {}
 
         self.load(records, n_lines)
 
+    @staticmethod
+    def get_workload_entry(best_records, target_key, workload_key):
+        """Get the entry of the target key and workload key hash in the given best record map.
+
+        Parameters
+        ----------
+        best_records: Dict[str, Dict[str, Dict[str, Any]]]
+            The best record map.
+        target_key: str
+            The first key to the best_records.
+        workload_key: str
+            The workload key that can be decoded to workload hash and args.
+
+        Returns
+        -------
+        entry: Dict[str, Any]
+            The entry in best_records with target key and workload hash.
+        workload_hash: str
+            The workload hash.

Review comment:
       ```suggestion
               The workload hash decoded from workload_key.
   ```
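
   As an aside for readers skimming the diff: below is a minimal, illustrative sketch of the lookup layout described in the comment above (Dict[target key] -> Dict[workload hash] -> Dict[workload args] -> (State, cost)) and of decoding a workload key. It is not the actual TVM implementation; the assumption that a workload key is a JSON list whose first element is the workload hash matches how auto_scheduler encodes keys, but the helper name, the "llvm" target key, and the "abc123" hash are placeholders.

```python
import json


def get_workload_entry_sketch(best_records, target_key, workload_key):
    """Illustrative variant of get_workload_entry: return the per-hash entry
    plus the decoded hash and args, following the nested-dict layout above."""
    # A workload key is a JSON-encoded list: [workload_hash, arg0, arg1, ...].
    decoded = json.loads(workload_key)
    workload_hash, workload_args = decoded[0], tuple(decoded[1:])

    # Create the nested entries on demand so callers can also insert records.
    entry = best_records.setdefault(target_key, {}).setdefault(workload_hash, {})
    return entry, workload_hash, workload_args


# Usage sketch: exact-args lookup first; a "compatible" lookup would instead
# scan the other args stored under the same workload hash bucket.
best_by_targetkey = {}
entry, whash, wargs = get_workload_entry_sketch(
    best_by_targetkey, "llvm", '["abc123", 1, 224, 224, 3]'
)
record = entry.get(wargs)  # None until a (State, cost) record is stored
```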







[GitHub] [tvm] masahi edited a comment on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

Posted by GitBox <gi...@apache.org>.
masahi edited a comment on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767987598


   Yes, I believe dynamic tuning and codegen are among the biggest challenges for TVM this year. I'm glad at least there are folks looking at the problem.
   
   MaskRCNN should serve as a good benchmark; it has both dynamic dense (very large) and dynamic conv2d + conv2d transpose. All of these are currently bottlenecks; without tuning them I cannot beat PyTorch.





[GitHub] [tvm] comaniac merged pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

Posted by GitBox <gi...@apache.org>.
comaniac merged pull request #7344:
URL: https://github.com/apache/tvm/pull/7344


   





[GitHub] [tvm] comaniac commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

Posted by GitBox <gi...@apache.org>.
comaniac commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767974936


   > @comaniac Does this mean tuning support for dynamic workloads (dynamic batch sizes, etc.) is coming soon? I'm very excited for this; it would tremendously help my MaskRCNN!!
   
   Ah, this is not a perfect solution for dynamic shape; it is more a way to make tuned logs more useful. For example, you can apply a tuning log collected with batch size 1 to all batch sizes. You can even tune several prime batch sizes to get better performance for their multiples. Meanwhile, we are working on dynamic shape support in auto_scheduler, but it may not be ready to be upstreamed before this summer or fall.
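
   A rough usage sketch of what this enables follows. Hedge: `include_compatible` is the new `ApplyHistoryBest` argument from this PR's diff, but the log path, network, and target below are placeholders, and whether a batch-1 record is actually reused depends on the workloads being judged compatible.

```python
import tvm
from tvm import auto_scheduler, relay
from tvm.relay import testing

# Tuning log collected for batch size 1 (placeholder path).
log_file = "resnet_batch1_tuned.json"

# Build the same network with a different batch size, e.g. 8.
mod, params = testing.resnet.get_workload(batch_size=8)
target = tvm.target.Target("llvm")

# With include_compatible=True, records whose workload hash matches are
# considered even when the workload args (e.g. the batch size) differ.
with auto_scheduler.ApplyHistoryBest(log_file, include_compatible=True):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target=target, params=params)
```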





[GitHub] [tvm] masahi commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

Posted by GitBox <gi...@apache.org>.
masahi commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767970847


   @comaniac Does this mean tuning support for dynamic workloads (dynamic batch sizes, etc.) is coming soon? I'm very excited for this; it would tremendously help my MaskRCNN!!





[GitHub] [tvm] comaniac commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

Posted by GitBox <gi...@apache.org>.
comaniac commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-768632418


   Thanks @merrymercy 

