Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/12/25 23:31:23 UTC

[GitHub] [tvm] merrymercy commented on a change in pull request #7167: [Doc][AutoScheduler] Improve hyperlinks in tutorials

merrymercy commented on a change in pull request #7167:
URL: https://github.com/apache/tvm/pull/7167#discussion_r548921916



##########
File path: tutorials/auto_scheduler/tune_network_mali.py
##########
@@ -339,9 +349,14 @@ def tune_and_evaluate():
 # 1. During the tuning, the auto-scheduler needs to compile many programs and
 #    extract features from them. This part is CPU-intensive,
 #    so a high-performance CPU with many cores is recommended for faster search.
-# 2. If you have multiple target devices, you can use all of them for measurements to
-#    parallelize the measurements. Check this :ref:`section <tutorials-autotvm-rpc-tracker>`
+# 2. You can use :code:`python3 -m tvm.auto_scheduler.measure_record --mode distill --i log.json`
+#    to distill the large log file and only save the best useful records.
+# 3. You can resume a search from the previous log file. You just need to
+#    add a new argument :code:`load_log_file` when creating the task scheduler
+#    in function :code:`run_tuning`. Say,
+#    :code:`tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)`
+# 4. If you have multiple target CPUs, you can use all of them for measurements to

Review comment:
       ```suggestion
   # 4. If you have multiple target GPUs, you can use all of them for measurements to
   ```
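
The `--mode distill` step mentioned in the diff keeps only the best record per task from a large tuning log. As a rough illustration of that idea (this is NOT TVM's actual implementation, and the record layout below -- a workload key under "i" and measured costs under "r" -- is a hypothetical simplification of TVM's JSON log lines), the distillation logic can be sketched in plain Python:

```python
import json
import os
import tempfile

def distill_log(in_path, out_path):
    """Keep only the lowest-cost record per workload key.

    Hypothetical sketch: the real TVM log format and selection logic
    differ; this only illustrates "save the best useful records".
    """
    best = {}  # workload key -> (cost, raw line)
    with open(in_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            key = rec["i"]        # hypothetical workload identifier
            cost = min(rec["r"])  # hypothetical list of measured costs
            if key not in best or cost < best[key][0]:
                best[key] = (cost, line)
    with open(out_path, "w") as f:
        for _, (_, line) in sorted(best.items()):
            f.write(line + "\n")

# Small demo with a fake log: two tasks, three records.
records = [
    {"i": "matmul", "r": [0.9, 1.0]},
    {"i": "matmul", "r": [0.5, 0.6]},  # best record for "matmul"
    {"i": "conv2d", "r": [2.0]},
]
with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "log.json")
    out = os.path.join(d, "log.best.json")
    with open(log, "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
    distill_log(log, out)
    with open(out) as f:
        kept = [json.loads(line) for line in f]
    print(len(kept))  # 2 records remain, one per task
```

Distilling periodically keeps the log small, which also speeds up resuming a search via :code:`load_log_file` since fewer records need to be replayed.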




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org