Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/12/16 05:22:46 UTC

[GitHub] [tvm] echuraev commented on a diff in pull request #13627: [docs] Add "Open with Colab" button to documentation

echuraev commented on code in PR #13627:
URL: https://github.com/apache/tvm/pull/13627#discussion_r1050370999


##########
docs/conf.py:
##########
@@ -249,76 +343,76 @@ def git_describe_version(original_version):
 # The unlisted files are sorted by filenames.
 # The unlisted files always appear after listed files.
 within_subsection_order = {
-    "tutorial": [
-        "introduction.py",
-        "install.py",
-        "tvmc_command_line_driver.py",
-        "tvmc_python.py",
-        "autotvm_relay_x86.py",
-        "tensor_expr_get_started.py",
-        "autotvm_matmul_x86.py",
-        "auto_scheduler_matmul_x86.py",
-        "tensor_ir_blitz_course.py",
-        "topi.pi",
-        "cross_compilation_and_rpc.py",
-        "relay_quick_start.py",
-        "uma.py",
-    ],
-    "compile_models": [
-        "from_pytorch.py",
-        "from_tensorflow.py",
-        "from_mxnet.py",
-        "from_onnx.py",
-        "from_keras.py",
-        "from_tflite.py",
-        "from_coreml.py",
-        "from_darknet.py",
-        "from_caffe2.py",
-        "from_paddle.py",
-    ],
-    "work_with_schedules": [
-        "schedule_primitives.py",
-        "reduction.py",
-        "intrin_math.py",
-        "scan.py",
-        "extern_op.py",
-        "tensorize.py",
-        "tuple_inputs.py",
-        "tedd.py",
-    ],
-    "optimize_operators": [
-        "opt_gemm.py",
-        "opt_conv_cuda.py",
-        "opt_conv_tensorcore.py",
-    ],
-    "tune_with_autotvm": [
-        "tune_conv2d_cuda.py",
-        "tune_relay_cuda.py",
-        "tune_relay_x86.py",
-        "tune_relay_arm.py",
-        "tune_relay_mobile_gpu.py",
-    ],
-    "tune_with_autoscheduler": [
-        "tune_matmul_x86.py",
-        "tune_conv2d_layer_cuda.py",
-        "tune_network_x86.py",
-        "tune_network_cuda.py",
-    ],
-    "extend_tvm": [
-        "low_level_custom_pass.py",
-        "use_pass_infra.py",
-        "use_pass_instrument.py",
-        "bring_your_own_datatypes.py",
-    ],
+    # "tutorial": [

Review Comment:
   Why did you comment this out?



##########
docs/conf.py:
##########
@@ -84,6 +84,100 @@ def git_describe_version(original_version):
 version = git_describe_version(tvm.__version__)
 release = version
 
+
+# Generate the
+COLAB_HTML_HEADER = """
+.. DO NOT EDIT.
+.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
+.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
+.. "{0}"
+.. LINE NUMBERS ARE GIVEN BELOW.
+
+.. only:: html
+
+    .. note::
+        :class: sphx-glr-download-link-note
+
+        This tutorial can be used interactively with Google Colab! You can also click
+        :ref:`here <sphx_glr_download_{1}>` to run the Jupyter notebook locally.
+
+        .. image:: https://raw.githubusercontent.com/guberti/web-data/main/images/utilities/colab_button.svg

Review Comment:
   I'm not sure about this path to the image. We should probably upload the image file to https://github.com/tlc-pack/web-data instead.
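
   As an aside on the template in the diff above: a quick sketch of how its two placeholders might be filled in per gallery file (the template body is abbreviated here, and the example paths/ref suffix are my assumptions, not taken from the PR):

   ```python
   # Abbreviated stand-in for the COLAB_HTML_HEADER template from conf.py.
   COLAB_HTML_HEADER = (
       ".. note::\n"
       "   This tutorial can be used interactively with Google Colab!\n"
       "   Source file: {0}\n"
       "   Download ref: sphx_glr_download_{1}\n"
   )

   # {0} is the source Python file path, {1} the sphinx-gallery ref suffix
   # (both values below are hypothetical examples).
   header = COLAB_HTML_HEADER.format(
       "gallery/tutorial/introduction.py",
       "tutorial_introduction.py",
   )
   ```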



##########
gallery/how_to/work_with_microtvm/install_zephyr.py:
##########
@@ -0,0 +1,29 @@
+%%shell

Review Comment:
   Would it be better to format this script as a documentation file in Sphinx format?



##########
tests/lint/check_request_hook.py:
##########
@@ -142,7 +142,7 @@ def find_code_block_line(lines: List[str]) -> Optional[int]:
                 else:
                     actual, expected = line_info
                     print(f"{file} (misplaced hook at {actual}, expected at {expected})")
-            exit(1)
+            #exit(1)

Review Comment:
   Why did you comment out this line?
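
   If the hook check needs to be relaxed for the new Colab files, an explicit allowlist might be cleaner than disabling the failure entirely. A sketch (the helper name and example path are my invention, not part of this PR):

   ```python
   # Hypothetical alternative to commenting out exit(1): keep the hard
   # failure, but skip files that are known exceptions.
   ALLOWLIST = {
       "gallery/how_to/work_with_microtvm/install_zephyr.py",
   }

   def should_fail(file: str, has_errors: bool) -> bool:
       # Fail the lint check only for files with errors that are
       # not explicitly allowlisted.
       return has_errors and file not in ALLOWLIST
   ```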



##########
docs/conf.py:
##########
@@ -84,6 +84,100 @@ def git_describe_version(original_version):
 version = git_describe_version(tvm.__version__)
 release = version
 
+
+# Generate the
+COLAB_HTML_HEADER = """
+.. DO NOT EDIT.
+.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
+.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
+.. "{0}"
+.. LINE NUMBERS ARE GIVEN BELOW.
+
+.. only:: html
+
+    .. note::
+        :class: sphx-glr-download-link-note
+
+        This tutorial can be used interactively with Google Colab! You can also click
+        :ref:`here <sphx_glr_download_{1}>` to run the Jupyter notebook locally.
+
+        .. image:: https://raw.githubusercontent.com/guberti/web-data/main/images/utilities/colab_button.svg

Review Comment:
   While reviewing your PR, I found that the same image file is already used in `gallery/how_to/work_with_microtvm/micro_train.py`. So you can probably reuse this link: https://raw.githubusercontent.com/tlc-pack/web-data/main/images/utilities/colab_button.png



##########
docs/Makefile:
##########
@@ -20,7 +20,7 @@
 
 # You can set these variables from the command line.
 SPHINXOPTS    =
-SPHINXBUILD   = python3 -m sphinx
+SPHINXBUILD   = python3.7 -m sphinx

Review Comment:
   Why do we need to specify an exact Python version instead of `python3`?



##########
gallery/how_to/work_with_microtvm/micro_aot.py:
##########
@@ -30,6 +30,29 @@
 or on Zephyr platform on a microcontroller/board supported by Zephyr.
 """
 
+######################################################################
+#
+#     .. include:: ../../../../gallery/how_to/work_with_microtvm/install_zephyr.rst

Review Comment:
   I tried to find where else you have included this `install_zephyr.rst`, but didn't find it. Can you just move the content of `install_zephyr.rst` into this file?



##########
gallery/how_to/work_with_microtvm/install_zephyr.rst:
##########
@@ -0,0 +1,34 @@
+Install the Prerequisites
+----------------------------
+
+    .. code-block:: bash

Review Comment:
   Same question here. Should we design this file as documentation and add more explanatory text? Right now it reads more like a bash script.



##########
gallery/how_to/work_with_microtvm/micro_train.py:
##########
@@ -71,7 +60,7 @@
 #
 #     .. code-block:: bash
 #
-#       %%bash
+#       %%shell

Review Comment:
   What is the difference between `%%bash` and `%%shell`?



##########
apps/microtvm/poetry.lock:
##########
@@ -1,28 +1,9 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-# This `poetry.lock` file is generated from `poetry install`.
-
 [[package]]
 name = "absl-py"
-version = "1.2.0"
+version = "1.3.0"

Review Comment:
   How are these changes in `apps/microtvm` related to adding the "Open with Colab" button?



##########
gallery/how_to/compile_models/from_keras.py:
##########
@@ -102,8 +102,8 @@
 shape_dict = {"input_1": data.shape}
 mod, params = relay.frontend.from_keras(keras_resnet50, shape_dict)
 # compile the model
-target = "cuda"
-dev = tvm.cuda(0)
+target = "llvm"
+dev = tvm.cpu(0)

Review Comment:
   I know that Colab provides several runtimes (`CPU` or `GPU`) which can be selected manually. Can we automatically select the preferred runtime for some scripts, e.g. where we want to use the `GPU`?
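
   One way this could be done automatically (a sketch under my own assumptions; `pick_target` and the `nvidia-smi` heuristic are not part of this PR) is to detect the GPU at runtime and fall back to `llvm`:

   ```python
   import shutil

   def pick_target() -> str:
       # Heuristic: a Colab GPU runtime ships with the `nvidia-smi` tool,
       # so its presence is a reasonable proxy for CUDA availability.
       if shutil.which("nvidia-smi"):
           return "cuda"
       return "llvm"

   target = pick_target()
   # dev would then be chosen accordingly, e.g. tvm.cuda(0) vs tvm.cpu(0).
   ```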



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@tvm.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org