Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/11/17 02:04:49 UTC

[GitHub] [incubator-tvm] comaniac opened a new pull request #6927: [AutoScheduler] Register workload when deserializing tasks

comaniac opened a new pull request #6927:
URL: https://github.com/apache/incubator-tvm/pull/6927


   One issue with the current auto_scheduler task serialization is that the workload is not registered when a task is deserialized, so users have to manually call `register_workload_tensors` afterward. However, `register_workload_tensors` always uses the compute DAG hash as the workload key, which can cause a workload key mismatch if the task came from a TE function that uses its function name as the workload key. This PR makes the following changes to ensure `WORKLOAD_FUNC_REGISTRY` stays consistent after deserializing a task.
   
   * Extend `register_workload_tensors` to take a function name.
   * Register the workload using the stored workload key when deserializing a task.
   
   cc @merrymercy @jcf94 
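
   The second bullet can be illustrated with a small sketch (not the actual TVM code; `restore_task` and the in-memory registry dict are stand-ins for illustration): the stored workload key is a JSON list whose first element identifies the workload, so deserialization can re-register under that original name instead of recomputing a DAG hash.

   ```python
   import json

   WORKLOAD_FUNC_REGISTRY = {}  # stand-in for auto_scheduler's global registry


   def register_workload_tensors(func_name, tensors):
       """Register tensors under an explicit function name (assumed signature)."""
       WORKLOAD_FUNC_REGISTRY[func_name] = tensors


   def restore_task(state):
       """Mimic SearchTask.__setstate__: parse the stored workload key and
       register the workload under its original name."""
       workload_key = state["workload_key"]
       try:
           func_name = json.loads(workload_key)[0]
       except Exception as err:
           raise RuntimeError("Invalid workload key %s" % workload_key) from err
       register_workload_tensors(func_name, state["tensors"])
       return func_name


   # A workload key is a JSON list; its first element names the workload.
   name = restore_task({"workload_key": '["matmul", 128, 128]', "tensors": ["A", "B", "C"]})
   ```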


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6927: [AutoScheduler] Register workload when deserializing tasks

Posted by GitBox <gi...@apache.org>.
comaniac commented on a change in pull request #6927:
URL: https://github.com/apache/incubator-tvm/pull/6927#discussion_r526376532



##########
File path: python/tvm/auto_scheduler/search_task.py
##########
@@ -63,6 +66,11 @@ def __getstate__(self):
     def __setstate__(self, state):
         self.dag = state["dag"]
         self.workload_key = state["workload_key"]
+        try:
+            func_name = json.loads(self.workload_key)[0]
+        except Exception:  # pylint: disable=broad-except
+            raise RuntimeError("Invalid workload key %s" % self.workload_key)
+        register_workload_tensors(func_name, self.dag.tensors)

Review comment:
       Changed the logic accordingly. Now we only register workloads extracted from Relay programs.
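
       A hedged sketch of what "only register workloads extracted from Relay programs" could look like (the hash-detection heuristic and all names here are assumptions for illustration, not the actual TVM implementation):

       ```python
       import json

       WORKLOAD_FUNC_REGISTRY = {}


       def register_if_extracted(workload_key, tensors):
           """Register only workloads whose key starts with a DAG hash, i.e.
           ones extracted from a Relay program; leave function-name keys
           alone. The hex-string heuristic is an assumption for this sketch."""
           first = json.loads(workload_key)[0]
           looks_like_hash = len(first) >= 32 and all(c in "0123456789abcdef" for c in first)
           if looks_like_hash and first not in WORKLOAD_FUNC_REGISTRY:
               WORKLOAD_FUNC_REGISTRY[first] = tensors
               return True
           return False
       ```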







[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6927: [AutoScheduler] Register workload when deserializing tasks

Posted by GitBox <gi...@apache.org>.
merrymercy commented on a change in pull request #6927:
URL: https://github.com/apache/incubator-tvm/pull/6927#discussion_r525709883



##########
File path: python/tvm/auto_scheduler/search_task.py
##########
@@ -63,6 +66,11 @@ def __getstate__(self):
     def __setstate__(self, state):
         self.dag = state["dag"]
         self.workload_key = state["workload_key"]
+        try:
+            func_name = json.loads(self.workload_key)[0]
+        except Exception:  # pylint: disable=broad-except
+            raise RuntimeError("Invalid workload key %s" % self.workload_key)
+        register_workload_tensors(func_name, self.dag.tensors)

Review comment:
       We should do more checks here. There are two kinds of workloads; see the note at
   https://github.com/apache/incubator-tvm/blob/51d81fb71781eb606f4b563ec28290dbd8d04faf/python/tvm/auto_scheduler/workload_registry.py#L42-L50
   
   We have to handle both cases correctly without incorrectly overwriting existing registrations.
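
   For illustration, the two kinds of keys described in that note could be told apart roughly like this (the exact key formats are assumed here, not quoted from TVM):

   ```python
   import json


   def workload_kind(workload_key, registered_funcs):
       """Classify a workload key. Assumed formats:
         registered function:  '["func_name", arg0, ...]'
         extracted from Relay: '["<dag_hash>", [shape, ...], ...]'
       """
       first = json.loads(workload_key)[0]
       return "registered_function" if first in registered_funcs else "extracted_dag"
   ```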
   
   
   







[GitHub] [incubator-tvm] merrymercy merged pull request #6927: [AutoScheduler] Register workload when deserializing tasks

Posted by GitBox <gi...@apache.org>.
merrymercy merged pull request #6927:
URL: https://github.com/apache/incubator-tvm/pull/6927


   




