Posted to commits@tvm.apache.org by tq...@apache.org on 2020/10/12 12:29:08 UTC

[incubator-tvm] branch main updated: [CI] Move to use main as the default (#6665)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/main by this push:
     new 0cdd285  [CI] Move to use main as the default (#6665)
0cdd285 is described below

commit 0cdd285abe58ac373c60c7544178888a47902d0d
Author: Tianqi Chen <tq...@users.noreply.github.com>
AuthorDate: Mon Oct 12 08:28:51 2020 -0400

    [CI] Move to use main as the default (#6665)
---
 Jenkinsfile                                        |  8 +++---
 Makefile                                           |  2 +-
 NEWS.md                                            |  4 +--
 README.md                                          |  4 +--
 apps/android_deploy/README.md                      |  4 +--
 apps/android_rpc/README.md                         |  6 ++---
 apps/benchmark/README.md                           |  2 +-
 apps/ios_rpc/tests/ios_rpc_mobilenet.py            |  2 +-
 apps/wasm-standalone/README.md                     |  4 +--
 docker/lint.sh                                     |  4 +--
 docs/conf.py                                       |  2 +-
 docs/contribute/code_guide.rst                     |  2 +-
 docs/contribute/community.rst                      |  2 +-
 docs/contribute/document.rst                       |  4 +--
 docs/contribute/git_howto.rst                      | 26 +++++++++----------
 docs/contribute/pull_request.rst                   |  8 +++---
 docs/contribute/release_process.rst                |  8 +++---
 docs/deploy/android.rst                            |  4 +--
 docs/deploy/cpp_deploy.rst                         | 10 ++++----
 docs/dev/frontend/tensorflow.rst                   |  4 +--
 docs/dev/index.rst                                 |  4 +--
 docs/dev/inferbound.rst                            | 30 +++++++++++-----------
 docs/dev/pass_infra.rst                            | 20 +++++++--------
 docs/dev/relay_add_pass.rst                        |  6 ++---
 docs/dev/relay_bring_your_own_codegen.rst          |  2 +-
 docs/dev/relay_intro.rst                           |  6 ++---
 docs/dev/runtime.rst                               | 22 ++++++++--------
 docs/dev/virtual_machine.rst                       | 18 ++++++-------
 docs/install/docker.rst                            |  2 +-
 docs/langref/relay_pattern.rst                     | 10 ++++----
 docs/vta/dev/hardware.rst                          | 12 ++++-----
 docs/vta/dev/index.rst                             |  2 +-
 docs/vta/index.rst                                 |  2 +-
 docs/vta/install.rst                               |  2 +-
 jvm/README.md                                      |  2 +-
 jvm/pom.xml                                        |  2 +-
 python/tvm/contrib/hexagon.py                      |  4 +--
 python/tvm/relay/testing/dcgan.py                  |  2 +-
 python/tvm/relay/testing/tf.py                     |  2 +-
 python/tvm/rpc/server.py                           |  2 +-
 rust/tvm/README.md                                 |  2 +-
 rust/tvm/examples/resnet/src/build_resnet.py       |  2 +-
 src/relay/backend/compile_engine.cc                | 30 +++++++++++-----------
 src/relay/backend/vm/compiler.cc                   |  2 +-
 src/relay/transforms/fuse_ops.cc                   | 18 ++++++-------
 src/runtime/thread_pool.cc                         |  8 +++---
 src/runtime/threading_backend.cc                   | 12 ++++-----
 tests/lint/clang_format.sh                         |  6 ++---
 tests/lint/git-black.sh                            |  2 +-
 tests/lint/git-clang-format.sh                     |  2 +-
 tests/lint/python_format.sh                        |  2 +-
 tests/python/contrib/test_ethosn/infrastructure.py |  2 +-
 tests/python/driver/tvmc/conftest.py               |  2 +-
 tests/python/frontend/darknet/test_forward.py      |  2 +-
 tests/python/frontend/mxnet/model_zoo/dcgan.py     |  2 +-
 tests/python/frontend/pytorch/qnn_test.py          |  2 +-
 tests/python/frontend/tflite/test_forward.py       | 14 +++++-----
 tutorials/autotvm/tune_relay_arm.py                |  4 +--
 tutorials/autotvm/tune_relay_cuda.py               |  2 +-
 tutorials/autotvm/tune_relay_mobile_gpu.py         |  4 +--
 tutorials/dev/bring_your_own_datatypes.py          |  4 +--
 tutorials/frontend/deploy_model_on_android.py      |  6 ++---
 tutorials/frontend/deploy_model_on_rasp.py         |  2 +-
 tutorials/frontend/deploy_prequantized.py          |  2 +-
 tutorials/frontend/deploy_prequantized_tflite.py   |  2 +-
 tutorials/frontend/deploy_ssd_gluoncv.py           |  2 +-
 tutorials/frontend/from_caffe2.py                  |  2 +-
 tutorials/frontend/from_coreml.py                  |  2 +-
 tutorials/frontend/from_darknet.py                 |  2 +-
 tutorials/frontend/from_keras.py                   |  2 +-
 tutorials/frontend/from_mxnet.py                   |  2 +-
 tutorials/frontend/from_onnx.py                    |  2 +-
 tutorials/frontend/from_pytorch.py                 |  2 +-
 tutorials/frontend/from_tensorflow.py              |  2 +-
 tutorials/frontend/from_tflite.py                  |  2 +-
 tutorials/get_started/relay_quick_start.py         |  2 +-
 tutorials/language/tedd.py                         |  6 ++---
 tutorials/optimize/opt_conv_cuda.py                |  6 ++---
 tutorials/optimize/opt_gemm.py                     |  2 +-
 vta/tutorials/autotvm/tune_relay_vta.py            |  2 +-
 vta/tutorials/frontend/deploy_classification.py    |  2 +-
 vta/tutorials/frontend/legacy/deploy_detection.py  |  2 +-
 vta/tutorials/matrix_multiply.py                   |  6 ++---
 vta/tutorials/optimize/convolution_opt.py          |  6 ++---
 vta/tutorials/optimize/matrix_multiply_opt.py      |  4 +--
 vta/tutorials/vta_get_started.py                   |  2 +-
 web/README.md                                      |  4 +--
 87 files changed, 231 insertions(+), 231 deletions(-)
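
For contributors with existing local clones, a default-branch rename like this one usually calls for a small one-time client-side update. A minimal sketch, assuming your fork's remote is named `origin` and the ASF repository is `upstream` (the remote names are an assumption, not part of this commit):

```bash
git branch -m master main            # rename your local branch
git fetch upstream
git branch -u upstream/main main     # track the renamed upstream branch
git remote set-head upstream -a      # refresh the remote's default HEAD
```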

diff --git a/Jenkinsfile b/Jenkinsfile
index 4f7729a..207ffe7 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -94,8 +94,8 @@ def init_git_win() {
 }
 
 def cancel_previous_build() {
-    // cancel previous build if it is not on master.
-    if (env.BRANCH_NAME != "master") {
+    // cancel previous build if it is not on main.
+    if (env.BRANCH_NAME != "main") {
         def buildNumber = env.BUILD_NUMBER as int
         // Milestone API allows us to cancel previous build
         // with the same milestone number
@@ -328,14 +328,14 @@ stage('Build packages') {
     }
   }
   // Here we could upload the packages to anaconda for releases
-  // and/or the master branch
+  // and/or the main branch
 }
 */
 
 stage('Deploy') {
     node('doc') {
       ws(per_exec_ws("tvm/deploy-docs")) {
-        if (env.BRANCH_NAME == "master") {
+        if (env.BRANCH_NAME == "main") {
            unpack_lib('mydocs', 'docs.tgz')
            sh "cp docs.tgz /var/docs/docs.tgz"
            sh "tar xf docs.tgz -C /var/docs"
diff --git a/Makefile b/Makefile
index 0896246..011dc5c 100644
--- a/Makefile
+++ b/Makefile
@@ -136,7 +136,7 @@ jvminstall:
 			-Dcflags="$(PKG_CFLAGS)" -Dldflags="$(PKG_LDFLAGS)" \
 			-Dcurrent_libdir="$(ROOTDIR)/$(OUTPUTDIR)" $(JVM_TEST_ARGS))
 format:
-	./tests/lint/git-clang-format.sh -i origin/master
+	./tests/lint/git-clang-format.sh -i origin/main
 	black .
 	cd rust; which cargo && cargo fmt --all; cd ..
 
diff --git a/NEWS.md b/NEWS.md
index 5554727..4c9bde0 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -2190,7 +2190,7 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the curre
 * Fix benchmark layout in graph tuner (#3926)
 * Fix Android Demo LLVM version (#3962)
 * Cast filepath arguments to string (#3968)
-* Fixes "common" sub crate using nightly and master (#3965)
+* Fixes "common" sub crate using nightly and main (#3965)
 * Changes to make tensorize work. These changes also fix the previously broken test. (#3981)
 * Remove FLOP computation when calling 3rd party library (#4005)
 * Use a more intuitive way to limit the #ops in a group (#4018)
@@ -2254,7 +2254,7 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the curre
 
 
 ### Deprecations
-* Deprecating python2 support in the master branch and the following release (v0.6). (#2994, #2986)
+* Deprecating python2 support in the main branch and the following release (v0.6). (#2994, #2986)
 * NNVM is deprecated and will be removed in a future version. (#4333, #4368)
 
 
diff --git a/README.md b/README.md
index f0e011b..779487e 100644
--- a/README.md
+++ b/README.md
@@ -15,14 +15,14 @@
 <!--- specific language governing permissions and limitations -->
 <!--- under the License. -->
 
-<img src=https://raw.githubusercontent.com/apache/incubator-tvm-site/master/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
+<img src=https://raw.githubusercontent.com/apache/incubator-tvm-site/main/images/logo/tvm-logo-small.png width=128/> Open Deep Learning Compiler Stack
 ==============================================
 [Documentation](https://tvm.apache.org/docs) |
 [Contributors](CONTRIBUTORS.md) |
 [Community](https://tvm.apache.org/community) |
 [Release Notes](NEWS.md)
 
-[![Build Status](https://ci.tlcpack.ai/buildStatus/icon?job=tvm/master)](https://ci.tlcpack.ai/job/tvm/job/master/)
+[![Build Status](https://ci.tlcpack.ai/buildStatus/icon?job=tvm/main)](https://ci.tlcpack.ai/job/tvm/job/main/)
 [![WinMacBuild](https://github.com/apache/incubator-tvm/workflows/WinMacBuild/badge.svg)](https://github.com/apache/incubator-tvm/actions?query=workflow%3AWinMacBuild)
 
 Apache TVM (incubating) is a compiler stack for deep learning systems. It is designed to close the gap between the
diff --git a/apps/android_deploy/README.md b/apps/android_deploy/README.md
index 5d6ad88..d5efba8 100644
--- a/apps/android_deploy/README.md
+++ b/apps/android_deploy/README.md
@@ -34,7 +34,7 @@ Alternatively, you may execute Docker image we provide which contains the requir
 
 ### Build APK
 
-Before you build the Android application, please refer to the [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/master/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary.
+Before you build the Android application, please refer to the [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/main/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary.
 
 ```
 dependencies {
@@ -124,7 +124,7 @@ If everything goes well, you will find compile tools in `/opt/android-toolchain-
 
 Follow the instructions [here](https://tvm.apache.org/docs/deploy/android.html) to get a compiled model for the Android target.
 
-Copy the compiled model files deploy_lib.so, deploy_graph.json and deploy_param.params to apps/android_deploy/app/src/main/assets/ and adjust the TVM flavor settings in [java](https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L81)
+Copy the compiled model files deploy_lib.so, deploy_graph.json and deploy_param.params to apps/android_deploy/app/src/main/assets/ and adjust the TVM flavor settings in [java](https://github.com/apache/incubator-tvm/blob/main/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java#L81)
 
 `CPU Version flavor`
 ```
diff --git a/apps/android_rpc/README.md b/apps/android_rpc/README.md
index 96c762f..29962d3 100644
--- a/apps/android_rpc/README.md
+++ b/apps/android_rpc/README.md
@@ -28,7 +28,7 @@ You will need JDK, [Android NDK](https://developer.android.com/ndk) and an Andro
 
 We use [Gradle](https://gradle.org) to build. Please follow [the installation instructions](https://gradle.org/install) for your operating system.
 
-Before you build the Android application, please refer to the [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/master/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary.
+Before you build the Android application, please refer to the [TVM4J Installation Guide](https://github.com/apache/incubator-tvm/blob/main/jvm/README.md) and install tvm4j-core to your local maven repository. You can find the tvm4j dependency declared in `app/build.gradle`. Modify it if necessary.
 
 ```
 dependencies {
@@ -146,7 +146,7 @@ android   1      1     0
 ```
 
 
-Then check out [android\_rpc/tests/android\_rpc\_test.py](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/tests/android_rpc_test.py) and run:
+Then check out [android\_rpc/tests/android\_rpc\_test.py](https://github.com/apache/incubator-tvm/blob/main/apps/android_rpc/tests/android_rpc_test.py) and run:
 
 ```bash
 # Specify the RPC tracker
@@ -157,7 +157,7 @@ export TVM_NDK_CC=/opt/android-toolchain-arm64/bin/aarch64-linux-android-g++
 python android_rpc_test.py
 ```
 
-This will compile TVM IR to shared libraries (CPU, OpenCL and Vulkan) and run vector addition on your Android device. To verify the compiled TVM IR shared libraries, set `test_opencl = True` for the OpenCL target or `test_vulkan = True` for the Vulkan target in [tests/android_rpc_test.py](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/tests/android_rpc_test.py); by default only the CPU target is executed.
+This will compile TVM IR to shared libraries (CPU, OpenCL and Vulkan) and run vector addition on your Android device. To verify the compiled TVM IR shared libraries, set `test_opencl = True` for the OpenCL target or `test_vulkan = True` for the Vulkan target in [tests/android_rpc_test.py](https://github.com/apache/incubator-tvm/blob/main/apps/android_rpc/tests/android_rpc_test.py); by default only the CPU target is executed.
 On my test device, it gives the following results.
 
 ```bash
diff --git a/apps/benchmark/README.md b/apps/benchmark/README.md
index ca96ff1..920033f 100644
--- a/apps/benchmark/README.md
+++ b/apps/benchmark/README.md
@@ -78,7 +78,7 @@ python3 -m tvm.exec.rpc_tracker
   `python3 -m tvm.exec.rpc_server --tracker=10.77.1.123:9190 --key=rk3399`, where 10.77.1.123 is the IP address of the tracker.
 
 * For Android device
-   * Build and install the TVM RPC APK on your device ([help](https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc)).
+   * Build and install the TVM RPC APK on your device ([help](https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc)).
     Make sure you can pass the Android RPC test; then you already know how to register.
 
 3. Verify the device registration
diff --git a/apps/ios_rpc/tests/ios_rpc_mobilenet.py b/apps/ios_rpc/tests/ios_rpc_mobilenet.py
index daac680..132377a 100644
--- a/apps/ios_rpc/tests/ios_rpc_mobilenet.py
+++ b/apps/ios_rpc/tests/ios_rpc_mobilenet.py
@@ -61,7 +61,7 @@ def compile_metal(src):
 
 
 def prepare_input():
-    img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+    img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
     img_name = "cat.png"
     synset_url = "".join(
         [
diff --git a/apps/wasm-standalone/README.md b/apps/wasm-standalone/README.md
index 1456000..e40d218 100644
--- a/apps/wasm-standalone/README.md
+++ b/apps/wasm-standalone/README.md
@@ -37,7 +37,7 @@
 
 ## Motivation
 
-<img src="https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png" alt="TVM hardware support" width="600"/>
+<img src="https://github.com/dmlc/web-data/raw/main/tvm/tutorial/tvm_support_list.png" alt="TVM hardware support" width="600"/>
 
 As demonstrated in TVM runtime [tutorials](https://tvm.apache.org/docs/tutorials/get_started/relay_quick_start.html), TVM already supports WASM as an optional hardware backend, so we can leverage the features of WebAssembly (portability, security) and TVM runtime (domain-specific, optimization) to build a flexible and auto-optimized graph compiler for all deep learning frameworks.
 
@@ -165,7 +165,7 @@ Options:
 Next, perform model inference using the commands below:
 ```
 $ cp ../../../wasm-graph/lib/wasm_graph_resnet50.wasm ./
-$ wget -O cat.png https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true
+$ wget -O cat.png https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true
 $ wget -O synset.csv https://raw.githubusercontent.com/kazum/tvm-wasm/master/synset.csv
 $ ./target/debug/test_graph_resnet50 -g ./wasm_graph_resnet50.wasm -i ./cat.png -l ./synset.csv
 original image dimensions: (256, 256)
diff --git a/docker/lint.sh b/docker/lint.sh
index 913d349..d15ce71 100755
--- a/docker/lint.sh
+++ b/docker/lint.sh
@@ -45,7 +45,7 @@ function run_lint_step() {
                 # NOTE: need to run git status to update some docker-side cache. Otherwise,
                 # git-clang-format will fail with "The following files would be modified but have
                 # unstaged changes:"
-                cmd=( bash -c 'git status &>/dev/null && tests/lint/git-clang-format.sh -i origin/master' )
+                cmd=( bash -c 'git status &>/dev/null && tests/lint/git-clang-format.sh -i origin/main' )
             fi
             ;;
         cpplint)
@@ -58,7 +58,7 @@ function run_lint_step() {
             if [ $inplace_fix -eq 0 ]; then
                 cmd=( tests/lint/python_format.sh )
             else
-                cmd=( tests/lint/git-black.sh -i origin/master )
+                cmd=( tests/lint/git-black.sh -i origin/main )
             fi
             ;;
         jnilint)
diff --git a/docs/conf.py b/docs/conf.py
index 9322f5a..259d9c3 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -48,7 +48,7 @@ sys.path.insert(0, os.path.join(curr_path, "../vta/python"))
 project = "tvm"
 author = "Apache Software Foundation"
 copyright = "2020, %s" % author
-github_doc_root = "https://github.com/apache/incubator-tvm/tree/master/docs/"
+github_doc_root = "https://github.com/apache/incubator-tvm/tree/main/docs/"
 
 os.environ["TVM_BUILD_DOC"] = "1"
 # Version information.
diff --git a/docs/contribute/code_guide.rst b/docs/contribute/code_guide.rst
index d790ce6..dbcddf7 100644
--- a/docs/contribute/code_guide.rst
+++ b/docs/contribute/code_guide.rst
@@ -36,7 +36,7 @@ C++ Code Styles
 
 We use `clang-format` to enforce the code style. Because the output of
 clang-format can differ between versions, it is recommended to use the
-same version of clang-format as the one used on master.
+same version of clang-format as the one used on main.
 You can also use the following command via docker.
 
 .. code:: bash
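
The hunk above stops just before the docker command itself. Elsewhere in this same commit (the docker/lint.sh and docs/contribute/pull_request.rst hunks), the pattern for running clang-format through the project's docker wrapper is:

```bash
# Same pattern as the docker/lint.sh and pull_request.rst hunks in this
# commit; the exact command elided from this hunk may differ slightly.
docker/bash.sh tvmai/ci-lint ./tests/lint/git-clang-format.sh -i origin/main
```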
diff --git a/docs/contribute/community.rst b/docs/contribute/community.rst
index f6ea514..fd6df0f 100644
--- a/docs/contribute/community.rst
+++ b/docs/contribute/community.rst
@@ -20,7 +20,7 @@
 TVM Community Guideline
 =======================
 
-TVM adopts the Apache-style model and is governed by merit. We believe that it is important to create an inclusive community where everyone can use, contribute to, and influence the direction of the project. See `CONTRIBUTORS.md <https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md>`_ for the current list of contributors.
+TVM adopts the Apache-style model and is governed by merit. We believe that it is important to create an inclusive community where everyone can use, contribute to, and influence the direction of the project. See `CONTRIBUTORS.md <https://github.com/apache/incubator-tvm/blob/main/CONTRIBUTORS.md>`_ for the current list of contributors.
 
 
 
diff --git a/docs/contribute/document.rst b/docs/contribute/document.rst
index a6f54dc..1bfab1e 100644
--- a/docs/contribute/document.rst
+++ b/docs/contribute/document.rst
@@ -68,7 +68,7 @@ Be careful to leave blank lines between sections of your documents.
 In the above case, there has to be a blank line before `Parameters`, `Returns` and `Examples`
 in order for the doc to be built correctly. To add a new function to the doc,
 we need to add the `sphinx.autodoc <http://www.sphinx-doc.org/en/master/ext/autodoc.html>`_
-rules to `docs/api/python <https://github.com/apache/incubator-tvm/tree/master/docs/api/python>`_.
+rules to `docs/api/python <https://github.com/apache/incubator-tvm/tree/main/docs/api/python>`_.
 You can refer to the existing files under this folder on how to add the functions.
 
 
@@ -96,7 +96,7 @@ to add comments about code logics to improve readability.
 Write Tutorials
 ---------------
 We use the `sphinx-gallery <https://sphinx-gallery.github.io/>`_ to build python tutorials.
-You can find the source code under `tutorials <https://github.com/apache/incubator-tvm/tree/master/tutorials>`_; it is quite self-explanatory.
+You can find the source code under `tutorials <https://github.com/apache/incubator-tvm/tree/main/tutorials>`_; it is quite self-explanatory.
 One thing worth noting is that the comment blocks are written in reStructuredText instead of markdown, so be aware of the syntax.
 
 The tutorial code will run on our build server to generate the document page.
diff --git a/docs/contribute/git_howto.rst b/docs/contribute/git_howto.rst
index 6bb0399..4585736 100644
--- a/docs/contribute/git_howto.rst
+++ b/docs/contribute/git_howto.rst
@@ -23,16 +23,16 @@ Git Usage Tips
 
 Here are some tips for git workflow.
 
-## How to resolve conflicts with master
+## How to resolve conflicts with main
 
-- First, rebase onto the most recent master
+- First, rebase onto the most recent main
 
 .. code:: bash
 
   # The first two steps can be skipped after you do it once.
   git remote add upstream [url to tvm repo]
   git fetch upstream
-  git rebase upstream/master
+  git rebase upstream/main
 
 
 - Git may show some conflicts it cannot merge, say `conflicted.py`.
@@ -84,16 +84,16 @@ to create a PR with set of meaningful commits. You can do it by following steps.
   git push --force
 
 
-Reset to the most recent master
--------------------------------
+Reset to the most recent main branch
+------------------------------------
 
-You can always use git reset to reset your branch to the most recent master.
+You can always use git reset to reset your branch to the most recent main.
 Note that all your ***local changes will get lost***.
 So only do it when you have no local changes or when your pull request has just been merged.
 
 .. code:: bash
 
-  git reset --hard [commit hash of master]
+  git reset --hard [commit hash of main]
 
 
 Recover a Previous Commit after Reset
 Once you find the right commit hash, you can use git reset again to move
 the head to that commit.
 
 
-Apply Only the k Latest Commits onto master
--------------------------------------------
+Apply Only the k Latest Commits onto main
+-----------------------------------------
 
-Sometimes it is useful to only apply your k latest changes on top of master.
+Sometimes it is useful to only apply your k latest changes on top of main.
 This usually happens when you have other m commits that are already merged
-before these k commits. Directly rebasing against master might cause merge conflicts
+before these k commits. Directly rebasing against main might cause merge conflicts
 on these first m commits (which can be safely discarded).
 
 You can instead use the following command:
@@ -124,9 +124,9 @@ You can instead use the following command
 
   # k is the concrete number
   # Put HEAD~1 for the last commit.
-  git rebase --onto upstream/master HEAD~k
+  git rebase --onto upstream/main HEAD~k
 
-You can then force push to master. Note that the above command will discard
+You can then force push to main. Note that the above command will discard
 all the commits before the last k ones.
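
As a concrete instance of the recipe above, with k = 3 picked arbitrarily for illustration:

```bash
# Replay only the last 3 commits of the current branch on top of
# upstream/main; everything before HEAD~3 is discarded.
git fetch upstream
git rebase --onto upstream/main HEAD~3
git push --force
```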
 
 
diff --git a/docs/contribute/pull_request.rst b/docs/contribute/pull_request.rst
index 935f2d5..e498c70 100644
--- a/docs/contribute/pull_request.rst
+++ b/docs/contribute/pull_request.rst
@@ -20,13 +20,13 @@ Submit a Pull Request
 
 This is a quick guide to submitting a pull request; please also refer to the detailed guidelines.
 
-- Before submitting, please rebase your code on the most recent version of master; you can do it by
+- Before submitting, please rebase your code on the most recent version of main; you can do it by
 
   .. code:: bash
 
     git remote add upstream [url to tvm repo]
     git fetch upstream
-    git rebase upstream/master
+    git rebase upstream/main
 
 - Make sure the code style checks pass by typing the following command, and that all the existing test cases pass.
 
@@ -48,8 +48,8 @@ This is a quick guide to submit a pull request, please also refer to the detaile
 
   .. code:: bash
 
-    # Run clang-format check for all the files that changed since upstream/master
-    docker/bash.sh tvmai/ci-lint ./tests/lint/git-clang-format.sh upstream/master
+    # Run clang-format check for all the files that changed since upstream/main
+    docker/bash.sh tvmai/ci-lint ./tests/lint/git-clang-format.sh upstream/main
 
 - Add test cases to cover the new features or bug fixes the patch introduces.
 - Document the code you wrote, see more at :ref:`doc_guide`
diff --git a/docs/contribute/release_process.rst b/docs/contribute/release_process.rst
index 705b55c..0f1e515 100644
--- a/docs/contribute/release_process.rst
+++ b/docs/contribute/release_process.rst
@@ -59,7 +59,7 @@ After generating the gpg key, you need to upload your key to a public key server
 
 If you want to do the release on another machine, you can transfer your gpg key to that machine via the :code:`gpg --export` and :code:`gpg --import` commands.
 
-The last step is to update the KEYS file with your code signing key (see https://www.apache.org/dev/openpgp.html#export-public-key). Check in the changes to the TVM master branch, as well as ASF SVN,
+The last step is to update the KEYS file with your code signing key (see https://www.apache.org/dev/openpgp.html#export-public-key). Check in the changes to the TVM main branch, as well as ASF SVN,
 
 .. code-block:: bash
 
@@ -90,7 +90,7 @@ To cut a release candidate, one needs to first cut a branch using selected versi
 Go to the GitHub repositories "releases" tab and click "Draft a new release",
 
 - Provide the release tag in the form of “v1.0.0.rc0” where 0 means it’s the first release candidate
-- Select the commit by clicking Target: branch > Recent commits > $commit_hash 
+- Select the commit by clicking Target: branch > Recent commits > $commit_hash
 - Copy and paste release note draft into the description box
 - Select "This is a pre-release"
 - Click "Publish release"
@@ -115,7 +115,7 @@ Create source code artifacts,
 	rm -rf .DS_Store
 	find . -name ".git*" -print0 | xargs -0 rm -rf
 	cd ..
-	brew install gnu-tar 
+	brew install gnu-tar
 	gtar -czvf apache-tvm-src-v0.6.0.rc0-incubating.tar.gz apache-tvm-src-v0.6.0.rc0-incubating
 
 Use your GPG key to sign the created artifact. First make sure your GPG is set to use the correct private key,
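
The signing commands themselves sit outside the changed hunks. For reference, a generic detached-signature plus checksum invocation looks like this (standard GPG/shasum usage, not necessarily the exact commands in release_process.rst):

```bash
gpg --armor --detach-sign apache-tvm-src-v0.6.0.rc0-incubating.tar.gz
shasum -a 512 apache-tvm-src-v0.6.0.rc0-incubating.tar.gz \
    > apache-tvm-src-v0.6.0.rc0-incubating.tar.gz.sha512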
@@ -147,7 +147,7 @@ The release manager also needs to upload the artifacts to ASF SVN,
 	cd svn-tvm
 	mkdir tvm-v0.6.0-rc0
 	# copy files into it
-	svn add tvm-0.6.0-rc0 
+	svn add tvm-v0.6.0-rc0
 	svn ci --username $ASF_USERNAME --password "$ASF_PASSWORD" -m "Add RC"
 
 
diff --git a/docs/deploy/android.rst b/docs/deploy/android.rst
index c724eab..e28eef3 100644
--- a/docs/deploy/android.rst
+++ b/docs/deploy/android.rst
@@ -38,5 +38,5 @@ deploy_lib.so, deploy_graph.json, deploy_param.params will go to android target.
 TVM Runtime for Android Target
 ------------------------------
 
-Refer `here <https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/README.md#build-and-installation>`_ to build the CPU/OpenCL flavor of the TVM runtime for an Android target.
-To load a model and execute it from the Android Java TVM API, refer to this `java <https://github.com/apache/incubator-tvm/blob/master/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java>`_ sample source.
+Refer `here <https://github.com/apache/incubator-tvm/blob/main/apps/android_deploy/README.md#build-and-installation>`_ to build the CPU/OpenCL flavor of the TVM runtime for an Android target.
+To load a model and execute it from the Android Java TVM API, refer to this `java <https://github.com/apache/incubator-tvm/blob/main/apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java>`_ sample source.
diff --git a/docs/deploy/cpp_deploy.rst b/docs/deploy/cpp_deploy.rst
index a298f95..f3de69d 100644
--- a/docs/deploy/cpp_deploy.rst
+++ b/docs/deploy/cpp_deploy.rst
@@ -19,7 +19,7 @@
 Deploy TVM Module using C++ API
 ===============================
 
-We provide an example of how to deploy TVM modules in `apps/howto_deploy <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy>`_
+We provide an example of how to deploy TVM modules in `apps/howto_deploy <https://github.com/apache/incubator-tvm/tree/main/apps/howto_deploy>`_
 
 To run the example, you can use the following command:
 
@@ -38,17 +38,17 @@ TVM provides a minimum runtime, which costs around 300K to 600K depending on how
 In most cases, we can use the ``libtvm_runtime.so`` that comes with the build.
 
 If you somehow find it hard to build ``libtvm_runtime``, check out
-`tvm_runtime_pack.cc <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/tvm_runtime_pack.cc>`_.
+`tvm_runtime_pack.cc <https://github.com/apache/incubator-tvm/tree/main/apps/howto_deploy/tvm_runtime_pack.cc>`_.
 It is an all-in-one example file that gives you the TVM runtime.
 You can compile this file with your build system and include it in your project.
 
-You can also check out `apps <https://github.com/apache/incubator-tvm/tree/master/apps/>`_ for example applications built with TVM on iOS, Android and others.
+You can also check out `apps <https://github.com/apache/incubator-tvm/tree/main/apps/>`_ for example applications built with TVM on iOS, Android and others.
 
 Dynamic Library vs. System Module
 ---------------------------------
 TVM provides two ways to use the compiled library.
-You can check out `prepare_test_libs.py <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/prepare_test_libs.py>`_
-to see how to generate the library and `cpp_deploy.cc <https://github.com/apache/incubator-tvm/tree/master/apps/howto_deploy/cpp_deploy.cc>`_ to see how to use it.
+You can check out `prepare_test_libs.py <https://github.com/apache/incubator-tvm/tree/main/apps/howto_deploy/prepare_test_libs.py>`_
+to see how to generate the library and `cpp_deploy.cc <https://github.com/apache/incubator-tvm/tree/main/apps/howto_deploy/cpp_deploy.cc>`_ to see how to use it.
 
 - Store library as a shared library and dynamically load the library into your project.
 - Bundle the compiled library into your project in system module mode.
diff --git a/docs/dev/frontend/tensorflow.rst b/docs/dev/frontend/tensorflow.rst
index bca0fc1..b234ed7 100644
--- a/docs/dev/frontend/tensorflow.rst
+++ b/docs/dev/frontend/tensorflow.rst
@@ -57,7 +57,7 @@ Export
 
 The TensorFlow frontend expects a frozen protobuf (.pb) or saved model as input. It currently does not support checkpoints (.ckpt). The graphdef needed by the TensorFlow frontend can be extracted from the active session, or by using the `TFParser`_ helper class.
 
-.. _TFParser: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/frontend/tensorflow_parser.py
+.. _TFParser: https://github.com/apache/incubator-tvm/blob/main/python/tvm/relay/frontend/tensorflow_parser.py
 
 The model should be exported with a number of transformations to prepare the model for inference. It is also important to set ``add_shapes=True``, as this will embed the output shapes of each node into the graph. Here is one function to export a model as a protobuf given a session:
 
@@ -101,7 +101,7 @@ Import the Model
 Explicit Shape:
 ~~~~~~~~~~~~~~~
 
-To ensure shapes can be known throughout the entire graph, pass the ``shape`` argument to ``from_tensorflow``. This dictionary maps input names to input shapes. Please refer to these `test cases <https://github.com/apache/incubator-tvm/blob/master/tests/python/frontend/tensorflow/test_forward.py#L36>`_ for examples.
+To ensure shapes can be known throughout the entire graph, pass the ``shape`` argument to ``from_tensorflow``. This dictionary maps input names to input shapes. Please refer to these `test cases <https://github.com/apache/incubator-tvm/blob/main/tests/python/frontend/tensorflow/test_forward.py#L36>`_ for examples.
 
 Data Layout
 ~~~~~~~~~~~
diff --git a/docs/dev/index.rst b/docs/dev/index.rst
index 2e577df..d70b90a 100644
--- a/docs/dev/index.rst
+++ b/docs/dev/index.rst
@@ -49,7 +49,7 @@ In this guide, we will study an example compilation flow in the compiler. The fi
 - Runtime Execution: the user loads back a `runtime.Module` and runs the compiled functions in the supported runtime environment.
 
 
-.. figure:: https://raw.githubusercontent.com/tvmai/web-data/master/images/design/tvm_dyn_workflow.svg
+.. figure:: https://raw.githubusercontent.com/tvmai/web-data/main/images/design/tvm_dyn_workflow.svg
    :align: center
    :width: 85%
 
@@ -201,7 +201,7 @@ except that the data structure of interest changes from the numpy.ndarray to tvm
 Logical Architecture Components
 -------------------------------
 
-.. figure:: https://raw.githubusercontent.com/tvmai/web-data/master/images/design/tvm_static_overview.svg
+.. figure:: https://raw.githubusercontent.com/tvmai/web-data/main/images/design/tvm_static_overview.svg
    :align: center
    :width: 85%
 
diff --git a/docs/dev/inferbound.rst b/docs/dev/inferbound.rst
index 6956600..7d0127a 100644
--- a/docs/dev/inferbound.rst
+++ b/docs/dev/inferbound.rst
@@ -22,7 +22,7 @@ InferBound Pass
 *******************************************
 
 
-The InferBound pass is run after normalize, and before ScheduleOps `build_module.py <https://github.com/apache/incubator-tvm/blob/master/python/tvm/driver/build_module.py>`_. The main job of InferBound is to create the bounds map, which specifies a Range for each IterVar in the program. These bounds are then passed to ScheduleOps, where they are used to set the extents of For loops, see `MakeLoopNest <https://github.com/apache/incubator-tvm/blob/master/src/te/operation/op_util.cc>`_, and [...]
+The InferBound pass is run after normalize, and before ScheduleOps `build_module.py <https://github.com/apache/incubator-tvm/blob/main/python/tvm/driver/build_module.py>`_. The main job of InferBound is to create the bounds map, which specifies a Range for each IterVar in the program. These bounds are then passed to ScheduleOps, where they are used to set the extents of For loops, see `MakeLoopNest <https://github.com/apache/incubator-tvm/blob/main/src/te/operation/op_util.cc>`_, and to  [...]
 
 The output of InferBound is a map from IterVar to Range:
 
@@ -53,9 +53,9 @@ Therefore, let's review the Range and IterVar classes:
    	};
    }
 
-Note that IterVarNode also contains a Range ``dom``. This ``dom`` may or may not have a meaningful value, depending on when the IterVar was created. For example, when ``tvm.compute`` is called, an `IterVar is created <https://github.com/apache/incubator-tvm/blob/master/src/te/operation/compute_op.cc>`_ for each axis and reduce axis, with ``dom`` values equal to the shape supplied in the call to ``tvm.compute``.
+Note that IterVarNode also contains a Range ``dom``. This ``dom`` may or may not have a meaningful value, depending on when the IterVar was created. For example, when ``tvm.compute`` is called, an `IterVar is created <https://github.com/apache/incubator-tvm/blob/main/src/te/operation/compute_op.cc>`_ for each axis and reduce axis, with ``dom`` values equal to the shape supplied in the call to ``tvm.compute``.
 
-On the other hand, when ``tvm.split`` is called, `IterVars are created <https://github.com/apache/incubator-tvm/blob/master/src/te/schedule/schedule_lang.cc>`_ for the inner and outer axes, but these IterVars are not given a meaningful ``dom`` value.
+On the other hand, when ``tvm.split`` is called, `IterVars are created <https://github.com/apache/incubator-tvm/blob/main/src/te/schedule/schedule_lang.cc>`_ for the inner and outer axes, but these IterVars are not given a meaningful ``dom`` value.
 
 In any case, the ``dom`` member of an IterVar is never modified during InferBound. However, keep in mind that the ``dom`` member of an IterVar is sometimes used as the default value for the Ranges InferBound computes.
 
@@ -117,14 +117,14 @@ Tensors haven't been mentioned yet, but in the context of TVM, a Tensor represen
    	int value_index;
    };
 
-In the Operation class declaration above, we can see that each operation also has a list of InputTensors. Thus the stages of the schedule form a DAG, where each stage is a node in the graph. There is an edge in the graph from Stage A to Stage B, if the operation of Stage B has an input tensor whose source operation is the op of Stage A. Put simply, there is an edge from A to B, if B consumes a tensor produced by A. See the diagram below. This graph is created at the beginning of InferBou [...]
+In the Operation class declaration above, we can see that each operation also has a list of InputTensors. Thus the stages of the schedule form a DAG, where each stage is a node in the graph. There is an edge in the graph from Stage A to Stage B, if the operation of Stage B has an input tensor whose source operation is the op of Stage A. Put simply, there is an edge from A to B, if B consumes a tensor produced by A. See the diagram below. This graph is created at the beginning of InferBou [...]
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/stage_graph.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/stage_graph.png
     :align: center
 
 InferBound makes one pass through the graph, visiting each stage exactly once. InferBound starts from the output stages (i.e., the solid blue nodes in the graph above), and moves upwards (in the opposite direction of the edges). This is achieved by performing a reverse topological sort on the nodes of the graph. Therefore, when InferBound visits a stage, each of its consumer stages has already been visited.
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/inferbound_traversal.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/inferbound_traversal.png
     :align: center
 
 The InferBound pass is shown in the following pseudo-code:
@@ -161,7 +161,7 @@ The InferBound pass traverses the stage graph, as described above. However, with
 
 Recall that all IterVars of the stage are related by IterVarRelations. The IterVarRelations of a stage form a directed acyclic hyper-graph, where each node of the graph corresponds to an IterVar, and each hyper-edge corresponds to an IterVarRelation. We can also represent this hyper-graph as a DAG, which is simpler to visualize as shown below.
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/relations.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/relations.png
     :align: center
 
 
@@ -206,7 +206,7 @@ This process can seem complicated. One reason is that a stage can have more than
 
 As mentioned above, a consumer may only require a small number of elements from each tensor. The consumers can be thought of as making requests to the stage for certain regions of its output tensors. The job of Phases 1-3 is to establish the regions of each output tensor that are required by each consumer.
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/inferbound_phases.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/inferbound_phases.png
     :align: center
 
 IntSets
@@ -320,13 +320,13 @@ A ComputeOp has only a single output Tensor, whose axes correspond to the axis v
    // i is the dimension
    rmap[axis[i]] = arith::Union(tmap[output][i]).cover_range(axis[i]->dom);
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/gatherbound.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/gatherbound.png
     :align: center
 
 
 The union of IntSets is computed by converting each IntSet to an Interval, and then taking the minimum of all of these intervals' minimums, and the maximum of all of their maximums.
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/union.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/union.png
     :align: center
 
 
@@ -335,7 +335,7 @@ This clearly results in some unnecessary computation, i.e., tensor elements will
 Unfortunately, even if we're lucky and the IntervalSet unions do not produce unnecessary computation, the fact that GatherBound considers each dimension of the tensor separately can also cause unnecessary computation. For example, in the diagram below the two consumers A and B require disjoint regions of the 2D tensor: consumer A requires T[0:2, 0:2], and consumer B requires T[2:4, 2:4]. GatherBound operates on each dimension of the tensor separately. For the first dimension of the tenso [...]
 
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/gatherbound_problem.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/gatherbound_problem.png
     :align: center
 
 .. _InferBoundCA:
@@ -691,7 +691,7 @@ Determining the amount of B that must be computed is the responsibility of Infer
 When InferRootBound is working on stage B, it visits B's consumer stage C to find out how much of B is requested by C. C has root_iter_vars ci and cj, which have been fused and then split. This results in the following :ref:`IterVarHyperGraph` for stage C.
 
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/passupdomain_problem.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/passupdomain_problem.png
     :align: center
 
 
@@ -750,16 +750,16 @@ This example shows that schedules containing a split of fused axes are difficult
 
 If the split factor is 4 or 8 in the above example, the region of B needed in each iteration of the outer loop is rectangular.
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/passupdomain_div.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/passupdomain_div.png
     :align: center
 
 However, if the split factor is changed from 4 to 3 in the example above, it is easy to see that the region of B that C needs can no longer be described by an independent Range for each of its axes.
 
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/passupdomain_nodiv.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/passupdomain_nodiv.png
     :align: center
 
 The best that can be done with rectangular regions is shown in the following diagram. The orange regions are the minimum rectangular regions covering the region of B that needs to be computed at each iteration of the outer loop.
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/docs/inferbound/passupdomain_min.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/docs/inferbound/passupdomain_min.png
     :align: center
diff --git a/docs/dev/pass_infra.rst b/docs/dev/pass_infra.rst
index 6fd150d..1427608 100644
--- a/docs/dev/pass_infra.rst
+++ b/docs/dev/pass_infra.rst
@@ -196,7 +196,7 @@ optimizations (IPO), which are similar to the module pass used in LLVM. Some
 typical passes in Relay that need the global picture of a module, such as
 A-normal form conversion and lambda lifting, etc., fall into this set. At this
 level, users can even add and/or delete functions in a module. Note that all
-passes 
+passes
 
 .. code:: c++
 
@@ -530,20 +530,20 @@ optimization pipeline and debug Relay and tir passes, please refer to the
 
 .. _Block: https://mxnet.incubator.apache.org/api/python/docs/api/gluon/block.html#gluon-block
 
-.. _include/tvm/ir/transform.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/ir/transform.h
+.. _include/tvm/ir/transform.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/ir/transform.h
 
-.. _src/relay/ir/transform.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/ir/transform.cc
+.. _src/relay/ir/transform.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/ir/transform.cc
 
-.. _src/ir/transform.cc: https://github.com/apache/incubator-tvm/blob/master/src/ir/transform.cc
+.. _src/ir/transform.cc: https://github.com/apache/incubator-tvm/blob/main/src/ir/transform.cc
 
-.. _src/relay/pass/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/pass/fold_constant.cc
+.. _src/relay/pass/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/pass/fold_constant.cc
 
-.. _python/tvm/relay/transform.py: https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/transform.py
+.. _python/tvm/relay/transform.py: https://github.com/apache/incubator-tvm/blob/main/python/tvm/relay/transform.py
 
-.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/relay/transform.h
+.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/relay/transform.h
 
-.. _python/tvm/ir/transform.py: https://github.com/apache/incubator-tvm/blob/master/python/tvm/ir/transform.py
+.. _python/tvm/ir/transform.py: https://github.com/apache/incubator-tvm/blob/main/python/tvm/ir/transform.py
 
-.. _src/tir/transforms/unroll_loop.cc: https://github.com/apache/incubator-tvm/blob/master/src/tir/transforms/unroll_loop.cc
+.. _src/tir/transforms/unroll_loop.cc: https://github.com/apache/incubator-tvm/blob/main/src/tir/transforms/unroll_loop.cc
 
-.. _use pass infra: https://github.com/apache/incubator-tvm/blob/master/tutorials/dev/use_pass_infra.py
+.. _use pass infra: https://github.com/apache/incubator-tvm/blob/main/tutorials/dev/use_pass_infra.py
diff --git a/docs/dev/relay_add_pass.rst b/docs/dev/relay_add_pass.rst
index e1a5e7e..02c0ba2 100644
--- a/docs/dev/relay_add_pass.rst
+++ b/docs/dev/relay_add_pass.rst
@@ -399,8 +399,8 @@ information about the pass manager interface can be found in :ref:`pass-infra`.
 Relay's standard passes are listed in `include/tvm/relay/transform.h`_ and implemented
 in `src/relay/pass/`_.
 
-.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/relay/transform.h
+.. _include/tvm/relay/transform.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/relay/transform.h
 
-.. _src/relay/pass/: https://github.com/apache/incubator-tvm/tree/master/src/relay/pass
+.. _src/relay/pass/: https://github.com/apache/incubator-tvm/tree/main/src/relay/pass
 
-.. _src/relay/transforms/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/transforms/fold_constant.cc
+.. _src/relay/transforms/fold_constant.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/transforms/fold_constant.cc
diff --git a/docs/dev/relay_bring_your_own_codegen.rst b/docs/dev/relay_bring_your_own_codegen.rst
index 3dc56ce..f4ee58a 100644
--- a/docs/dev/relay_bring_your_own_codegen.rst
+++ b/docs/dev/relay_bring_your_own_codegen.rst
@@ -137,7 +137,7 @@ Here we highlight the notes marked in the above code:
 
 * **Note 3** is a TVM runtime compatible wrapper function. It accepts a list of input tensors and one output tensor (the last argument), casts them to the right data type, and invokes the subgraph function described in Note 2. In addition, ``TVM_DLL_EXPORT_TYPED_FUNC`` is a TVM macro that generates another function ``gcc_0`` with unified the function arguments by packing all tensors to ``TVMArgs``. As a result, the TVM runtime can directly invoke ``gcc_0`` to execute the subgraph without [...]
 
-In the rest of this section, we will implement a codegen step-by-step to generate the above code. Your own codegen has to be located at ``src/relay/backend/contrib/<your-codegen-name>/``. In our example, we name our codegen "codegen_c" and put it under `/src/relay/backend/contrib/codegen_c/ <https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/codegen_c/codegen.cc>`_. Feel free to check this file for a complete implementation.
+In the rest of this section, we will implement a codegen step-by-step to generate the above code. Your own codegen has to be located at ``src/relay/backend/contrib/<your-codegen-name>/``. In our example, we name our codegen "codegen_c" and put it under `/src/relay/backend/contrib/codegen_c/ <https://github.com/apache/incubator-tvm/blob/main/src/relay/backend/contrib/codegen_c/codegen.cc>`_. Feel free to check this file for a complete implementation.
 
 Specifically, we are going to implement two classes in this file and here is their relationship:
 
diff --git a/docs/dev/relay_intro.rst b/docs/dev/relay_intro.rst
index fac4479..87f68fc 100644
--- a/docs/dev/relay_intro.rst
+++ b/docs/dev/relay_intro.rst
@@ -37,7 +37,7 @@ Though dataflow graphs are limited in terms of the computations they are capable
 lacking control flow, their simplicity makes it easier to implement automatic differentiation and
 compile for heterogeneous execution environments (e.g., executing parts of the graph on specialized hardware).
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/relay/dataflow.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/relay/dataflow.png
     :align: center
 
 
@@ -127,7 +127,7 @@ it to the var, then return the evaluated result in the body expression.
 You can use a sequence of let bindings to construct a program that is logically equivalent to a dataflow program.
 The code example below shows one program in both forms side by side.
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/relay/dataflow_vs_func.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/relay/dataflow_vs_func.png
     :align: center
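
The figure linked above is not reproduced in this plain-text archive. As a rough stand-in, the two equivalent forms can be sketched against the Relay Python API of this era (an illustrative sketch, not code from the commit):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 10))

# Dataflow form: intermediate values are just nested expressions.
dataflow_body = relay.add(relay.log(x), relay.log(x))

# Let-bound form: the intermediate value is named and bound explicitly,
# pinning down where it is evaluated.
v1 = relay.var("v1")
let_body = relay.Let(v1, relay.log(x), relay.add(v1, v1))

f = relay.Function([x], let_body)
print(f)
```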
 
 
@@ -151,7 +151,7 @@ Why We Might Need Let Binding
 One key usage of let binding is that it specifies the scope of computation. Let us take a look at the following example,
 which does not use let bindings.
 
-.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/master/images/relay/let_scope.png
+.. image:: https://raw.githubusercontent.com/tvmai/tvmai.github.io/main/images/relay/let_scope.png
     :align: center
 
 The problem comes when we try to decide where we should evaluate node ``%1``. In particular, while the text format seems
diff --git a/docs/dev/runtime.rst b/docs/dev/runtime.rst
index 7a001fa..91b19ee 100644
--- a/docs/dev/runtime.rst
+++ b/docs/dev/runtime.rst
@@ -45,7 +45,7 @@ PackedFunc
 `PackedFunc`_ is a simple but elegant solution
 we found to solve the challenges listed. The following code block provides an example in C++:
 
-.. _PackedFunc: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/packed_func.h
+.. _PackedFunc: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/packed_func.h
 
 .. code:: c
 
@@ -131,9 +131,9 @@ which allows us to embed the PackedFunc into any languages. Besides python, so f
 `java`_ and `javascript`_.
 This philosophy of embedded API is very much like Lua's, except that we don't introduce a new language but use C++.
 
-.. _minimum C API: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/c_runtime_api.h
-.. _java: https://github.com/apache/incubator-tvm/tree/master/jvm
-.. _javascript: https://github.com/apache/incubator-tvm/tree/master/web
+.. _minimum C API: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/c_runtime_api.h
+.. _java: https://github.com/apache/incubator-tvm/tree/main/jvm
+.. _javascript: https://github.com/apache/incubator-tvm/tree/main/web
 
 
 One fun fact about PackedFunc is that we use it for both the compiler and the deployment stack.
@@ -141,7 +141,7 @@ One fun fact about PackedFunc is that we use it for both compiler and deployment
 - All of TVM's compiler pass functions are exposed to the frontend as PackedFunc; see `here`_
 - The compiled module also returns the compiled function as PackedFunc
 
-.. _here: https://github.com/apache/incubator-tvm/tree/master/src/api
+.. _here: https://github.com/apache/incubator-tvm/tree/main/src/api
 
 To keep the runtime minimal, we isolated the IR Object support from the deployment runtime. The resulting runtime takes around 200K - 600K depending on how many runtime driver modules (e.g., CUDA) get included.
 
@@ -162,7 +162,7 @@ TVM defines the compiled object as `Module`_.
 The user can get compiled functions from a Module as PackedFuncs.
 The generated compiled code can dynamically get a function from a Module at runtime. It caches the function handle on the first call and reuses it in subsequent calls. We use this to link device code and to call back into any PackedFunc (e.g., Python) from generated code.
 
-.. _Module: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/module.h
+.. _Module: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/module.h
 
 The ModuleNode is an abstract class that can be implemented by each type of device.
 So far we support modules for CUDA, Metal, OpenCL and loading dynamic shared libraries. This abstraction makes introduction
@@ -198,7 +198,7 @@ All the language object in the compiler stack is a subclass of ``Object``. Each
 the type of object. We choose string instead of int as the type key so new ``Object`` classes can be added in a decentralized fashion without
 adding the code back to the central repo. To speed up dispatching, we allocate an integer type_index at runtime for each type_key.
 
-.. _Object: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/object.h
+.. _Object: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/object.h
 
 Since one ``Object`` can usually be referenced from multiple places in the language, we use a shared_ptr to keep
 track of references. We use the ``ObjectRef`` class to represent a reference to an ``Object``.
@@ -279,17 +279,17 @@ Each argument in PackedFunc contains a union value `TVMValue`_
 and a type code. This design allows a dynamically typed language to convert to the corresponding type directly, and a statically typed language to
 do runtime type checking during conversion.
 
-.. _TVMValue: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/c_runtime_api.h#L122
+.. _TVMValue: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/c_runtime_api.h#L122
 
 The relevant files are
 
 - `packed_func.h`_ for C++ API
 - `c_runtime_api.cc`_ for C API and how to provide callback.
 
-.. _packed_func.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/packed_func.h
-.. _c_runtime_api.cc: https://github.com/apache/incubator-tvm/blob/master/src/runtime/c_runtime_api.cc#L262
+.. _packed_func.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/packed_func.h
+.. _c_runtime_api.cc: https://github.com/apache/incubator-tvm/blob/main/src/runtime/c_runtime_api.cc#L262
 
 To support extension types, we use a registry system to register type-related information, like support of ``any``
 in C++; see `Extension types`_ for more details.
 
-.. _Extension types: https://github.com/apache/incubator-tvm/tree/master/apps/extension
+.. _Extension types: https://github.com/apache/incubator-tvm/tree/main/apps/extension
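
The runtime.rst hunks above update only the links, not the C++ example the text refers to. For context, the canonical PackedFunc calling convention looks roughly like the following sketch, reconstructed from the public headers of this era rather than quoted from the file:

```cpp
#include <tvm/runtime/packed_func.h>
#include <iostream>

using namespace tvm::runtime;

// Arguments arrive as a type-erased TVMArgs pack; the result is written
// through TVMRetValue. Type codes are checked when values are unpacked.
void MyAdd(TVMArgs args, TVMRetValue* rv) {
  int a = args[0];
  int b = args[1];
  *rv = a + b;
}

int main() {
  PackedFunc myadd(MyAdd);
  int c = myadd(1, 2);          // variadic call; each argument carries a type code
  std::cout << c << std::endl;  // prints 3
  return 0;
}
```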
diff --git a/docs/dev/virtual_machine.rst b/docs/dev/virtual_machine.rst
index ae6cac2..0986328 100644
--- a/docs/dev/virtual_machine.rst
+++ b/docs/dev/virtual_machine.rst
@@ -278,11 +278,11 @@ to represent tensor, tuple/list, and closure data, respectively. More details
 for each of them can be found at `include/tvm/runtime/ndarray.h`_,
 `include/tvm/runtime/vm/vm.h`_, and `include/tvm/runtime/container.h`_, respectively.
 
-.. _include/tvm/runtime/ndarray.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/ndarray.h
+.. _include/tvm/runtime/ndarray.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/ndarray.h
 
-.. _include/tvm/runtime/vm/vm.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/vm/vm.h
+.. _include/tvm/runtime/vm/vm.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/vm/vm.h
 
-.. _include/tvm/runtime/container.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/container.h
+.. _include/tvm/runtime/container.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/container.h
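Of the three, the tensor object is directly visible from Python as ``tvm.nd.NDArray``; tuple and closure objects are constructed by the VM internally. A minimal sketch:

```python
import numpy as np
import tvm

t = tvm.nd.array(np.arange(4, dtype="float32"))  # tensor object
print(t.shape, t.dtype)  # (4,) float32
```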
 
 Stack and State
 ~~~~~~~~~~~~~~~
@@ -326,7 +326,7 @@ The functions contain metadata about the function as well as its compiled byteco
 object can then be loaded and run by a ``tvm::relay::vm::VirtualMachine`` object. For full definitions of the
 data structures, please see `include/tvm/runtime/vm/executable.h`_ and `include/tvm/runtime/vm/vm.h`_.
 
-.. _include/tvm/runtime/vm/executable.h: https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/vm/executable.h
+.. _include/tvm/runtime/vm/executable.h: https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/vm/executable.h
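A minimal end-to-end sketch of the flow described above, assuming an LLVM-enabled build; the function simply doubles its input:

```python
import numpy as np
import tvm
from tvm import relay

x = relay.var("x", shape=(2, 2))  # float32 by default
mod = tvm.IRModule.from_expr(relay.Function([x], x + x))

vm_exec = relay.vm.compile(mod, target="llvm")          # an Executable
vm = tvm.runtime.vm.VirtualMachine(vm_exec, tvm.cpu())  # load and run it
out = vm.invoke("main", np.ones((2, 2), dtype="float32"))
print(out.asnumpy())  # [[2. 2.] [2. 2.]]
```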
 
 Optimizations
 ~~~~~~~~~~~~~
@@ -343,11 +343,11 @@ Optimizations marked with `TODO` are not implemented yet.
 - Tail Call Optimization (TODO)
 - Liveness Analysis (TODO)
 
-.. _src/relay/vm/lambda_lift.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/vm/lambda_lift.cc
+.. _src/relay/vm/lambda_lift.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/backend/vm/lambda_lift.cc
 
-.. _src/relay/vm/inline_primitives.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/vm/inline_primitives.cc
+.. _src/relay/vm/inline_primitives.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/backend/vm/inline_primitives.cc
 
-.. _src/relay/backend/vm/compiler.cc: https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/vm/compiler.cc
+.. _src/relay/backend/vm/compiler.cc: https://github.com/apache/incubator-tvm/blob/main/src/relay/backend/vm/compiler.cc
 
 Serialization
 ~~~~~~~~~~~~~
@@ -386,7 +386,7 @@ load the serialized kernel binary and executable related binary code, which will
 instantiate a VM object. Please refer to the `test_vm_serialization.py`_ file for more
 examples.
 
-.. _test_vm_serialization.py: https://github.com/apache/incubator-tvm/blob/master/tests/python/relay/test_vm_serialization.py
+.. _test_vm_serialization.py: https://github.com/apache/incubator-tvm/blob/main/tests/python/relay/test_vm_serialization.py
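A sketch of the round trip, following the usage in the test file above (``vm_exec`` is assumed to be an Executable obtained from ``relay.vm.compile``, as in the previous sketch):

```python
import tvm

# vm_exec: an Executable from relay.vm.compile (see the sketch above)
code, lib = vm_exec.save()  # bytecode plus the kernel library Module
# ... persist or ship `code` and `lib`, then reconstruct:
loaded = tvm.runtime.vm.Executable.load_exec(code, lib)
vm = tvm.runtime.vm.VirtualMachine(loaded, tvm.cpu())
```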
 
 Unresolved Questions
 ~~~~~~~~~~~~~~~~~~~~
@@ -406,4 +406,4 @@ How do we support heterogenous execution?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Heterogeneous execution should work out of the box, assuming we have annotated the appropriate device copies.
-In order to do this properly we need to run the device annotation and copying passes. 
+In order to do this properly, we need to run the device annotation and copying passes.
diff --git a/docs/install/docker.rst b/docs/install/docker.rst
index b77e122..243e438 100644
--- a/docs/install/docker.rst
+++ b/docs/install/docker.rst
@@ -67,7 +67,7 @@ with ``localhost`` when pasting it into browser.
 
 Docker Source
 -------------
-Check out `The docker source <https://github.com/apache/incubator-tvm/tree/master/docker>`_ if you are interested in
+Check out `the docker source <https://github.com/apache/incubator-tvm/tree/main/docker>`_ if you are interested in
 building your own docker images.
 
 
diff --git a/docs/langref/relay_pattern.rst b/docs/langref/relay_pattern.rst
index 6cacff2..17282e1 100644
--- a/docs/langref/relay_pattern.rst
+++ b/docs/langref/relay_pattern.rst
@@ -20,9 +20,9 @@
 Pattern Matching in Relay
 =========================
 
-There are many places in TVM where we identify pure data-flow sub-graphs of the Relay program and attempt to transform them in some way example passes include fusion, quantization, external code generation, and device specific optimizations such as bitpacking, and layer slicing used by VTA. 
+There are many places in TVM where we identify pure data-flow sub-graphs of the Relay program and attempt to transform them in some way; example passes include fusion, quantization, external code generation, and device-specific optimizations such as bitpacking and layer slicing used by VTA.
 
-Many of these passes today require a lots of boring boilerplate code in order to implement as well as requiring users to think in terms of visitors and AST matching. Many of these transformations can easily be described in terms of graph rewrites. In order to build a rewriter or other advanced machinery we first need a language of patterns to describe what we can match. 
+Many of these passes today require a lot of boilerplate code to implement, as well as requiring users to think in terms of visitors and AST matching. Many of these transformations can easily be described in terms of graph rewrites. In order to build a rewriter or other advanced machinery, we first need a language of patterns to describe what we can match.
 
 Such a language is useful not just for building a rewriter but also for providing extension points for existing passes. For example, the fusion pass could be parameterized by a set of fusion patterns that describe the capability of your hardware, and the quantization pass could take a set of patterns that describe which operators can be quantized on a given platform.
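A small taste of such a pattern language (using the dataflow pattern API this document goes on to describe): match a conv2d followed by a relu.

```python
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

# a pattern: nn.relu applied to the result of nn.conv2d on any two inputs
pat = is_op("nn.relu")(is_op("nn.conv2d")(wildcard(), wildcard()))

x = relay.var("x", shape=(1, 3, 32, 32))
w = relay.var("w", shape=(8, 3, 3, 3))
assert pat.match(relay.nn.relu(relay.nn.conv2d(x, w)))
```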
 
@@ -35,7 +35,7 @@ There are quite a few properties of operators that are worth matching. Below we
 demonstrate how to write patterns. It is recommended to check `tests/python/relay/test_dataflow_pattern.py`_
 for more use cases.
 
-.. _tests/python/relay/test_dataflow_pattern.py: https://github.com/apache/incubator-tvm/blob/master/tests/python/relay/test_dataflow_pattern.py
+.. _tests/python/relay/test_dataflow_pattern.py: https://github.com/apache/incubator-tvm/blob/main/tests/python/relay/test_dataflow_pattern.py
 
 .. note::
 
@@ -200,7 +200,7 @@ use ``is_expr``. This could be useful for algebraic simplify.
     def test_match_plus_zero():
         zero = (is_expr(relay.const(0)) | is_expr(relay.const(0.0)))
         pattern = wildcard() + zero
-        
+
         x = relay.Var('x')
         y = x + relay.const(0)
         assert pattern.match(y)
@@ -356,7 +356,7 @@ with a single batch_norm op:
             self.beta = wildcard()
             self.gamma = wildcard()
             self.eps = wildcard()
-            
+
             self.pattern = self.gamma * (self.x - self.mean)/is_op("sqrt")(self.var + self.eps) + self.beta
 
         def callback(self, pre, post, node_map):
diff --git a/docs/vta/dev/hardware.rst b/docs/vta/dev/hardware.rst
index 4d06826..c8d5433 100644
--- a/docs/vta/dev/hardware.rst
+++ b/docs/vta/dev/hardware.rst
@@ -36,7 +36,7 @@ In addition the design adopts decoupled access-execute to hide memory access lat
 
 To a broader extent, VTA can serve as a template deep learning accelerator design for full stack optimization, exposing a generic tensor computation interface to the compiler stack.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/blogpost/vta_overview.png
+.. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/blogpost/vta_overview.png
    :align: center
    :width: 80%
 
@@ -175,7 +175,7 @@ Finally, the ``STORE`` instructions are executed by the store module exclusively
 The fields of each instruction are described in the figure below.
 The meaning of each field will be further explained in the :ref:`vta-uarch` section.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/vta_instructions.png
+.. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/developer/vta_instructions.png
    :align: center
    :width: 100%
 
@@ -191,7 +191,7 @@ VTA relies on dependence FIFO queues between hardware modules to synchronize the
 The figure below shows how a given hardware module can execute concurrently from its producer and consumer modules in a dataflow fashion through the use of dependence FIFO queues, and single-reader/single-writer SRAM buffers.
 Each module is connected to its consumer and producer via read-after-write (RAW) and write-after-read (WAR) dependence queues.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/dataflow.png
+.. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/developer/dataflow.png
    :align: center
    :width: 100%
 
@@ -258,7 +258,7 @@ There are two types of compute micro-ops: ALU and GEMM operations.
 To minimize the footprint of micro-op kernels, while avoiding the need for control-flow instructions such as conditional jumps, the compute module executes micro-op sequences inside a two-level nested loop that computes the location of each tensor register via an affine function.
 This compression approach helps reduce the micro-kernel instruction footprint, and applies to both matrix multiplication and 2D convolution, commonly found in neural network operators.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/gemm_core.png
+.. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/developer/gemm_core.png
    :align: center
    :width: 100%
 
@@ -269,7 +269,7 @@ This tensorization intrinsic is defined by the dimensions of the input, weight a
 Each data type can have a different integer precision: typically both the weight and input types are low-precision (8 bits or fewer), while the accumulator tensor has a wider type to prevent overflows (32 bits).
 In order to keep the GEMM core busy, each of the input buffer, weight buffer, and register file has to expose sufficient read/write bandwidth.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/alu_core.png
+.. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/developer/alu_core.png
    :align: center
    :width: 100%
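As a rough numpy sketch of one GEMM-core multiply-accumulate, assuming the default VTA shape (BATCH, BLOCK_IN, BLOCK_OUT) = (1, 16, 16) with int8 operands and an int32 accumulator:

```python
import numpy as np

inp = np.random.randint(-8, 8, size=(1, 16)).astype(np.int8)   # input tile
wgt = np.random.randint(-8, 8, size=(16, 16)).astype(np.int8)  # weight tile (BLOCK_OUT, BLOCK_IN)
acc = np.zeros((1, 16), dtype=np.int32)                        # accumulator tile

# one cycle of the pipelined GEMM: acc[b, o] += inp[b, i] * wgt[o, i]
acc += inp.astype(np.int32) @ wgt.astype(np.int32).T
```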
 
@@ -289,7 +289,7 @@ The micro-code in the context of tensor ALU computation only takes care of speci
 Load and Store Modules
 ~~~~~~~~~~~~~~~~~~~~~~
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/2d_dma.png
+.. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/developer/2d_dma.png
    :align: center
    :width: 100%
 
diff --git a/docs/vta/dev/index.rst b/docs/vta/dev/index.rst
index 0ba3bd1..d95f6e2 100644
--- a/docs/vta/dev/index.rst
+++ b/docs/vta/dev/index.rst
@@ -20,7 +20,7 @@ VTA Design and Developer Guide
 
 This developer guide details the complete VTA-TVM hardware-software stack.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/blogpost/vta_stack.png
+.. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/blogpost/vta_stack.png
    :align: center
    :width: 60%
 
diff --git a/docs/vta/index.rst b/docs/vta/index.rst
index 357c061..d97705e 100644
--- a/docs/vta/index.rst
+++ b/docs/vta/index.rst
@@ -22,7 +22,7 @@ VTA: Deep Learning Accelerator Stack
 
 The Versatile Tensor Accelerator (VTA) is an open, generic, and customizable deep learning accelerator with a complete TVM-based compiler stack. We designed VTA to expose the most salient and common characteristics of mainstream deep learning accelerators. Together TVM and VTA form an end-to-end hardware-software deep learning system stack that includes hardware design, drivers, a JIT runtime, and an optimizing compiler stack based on TVM.
 
-.. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/blogpost/vta_overview.png
+.. image:: https://raw.githubusercontent.com/uwsampl/web-data/main/vta/blogpost/vta_overview.png
    :align: center
    :width: 60%
 
diff --git a/docs/vta/install.rst b/docs/vta/install.rst
index e47f84d..4cd1ee9 100644
--- a/docs/vta/install.rst
+++ b/docs/vta/install.rst
@@ -466,7 +466,7 @@ This would add quartus binary path into your ``PATH`` environment variable, so y
 Chisel-based Custom VTA Bitstream Compilation for DE10-Nano
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Similar to the HLS-based design, high-level hardware parameters in Chisel-based design are listed in the VTA configuration file `Configs.scala <https://github.com/apache/incubator-tvm/blob/master/3rdparty/vta-hw/hardware/chisel/src/main/scala/core/Configs.scala>`_, and they can be customized by the user.
+Similar to the HLS-based design, high-level hardware parameters in the Chisel-based design are listed in the VTA configuration file `Configs.scala <https://github.com/apache/incubator-tvm/blob/main/3rdparty/vta-hw/hardware/chisel/src/main/scala/core/Configs.scala>`_, and they can be customized by the user.
 
 For Intel FPGA, bitstream generation is driven by a top-level ``Makefile`` under ``<tvm root>/3rdparty/vta-hw/hardware/intel``.
 
diff --git a/jvm/README.md b/jvm/README.md
index b891adc..348e941 100644
--- a/jvm/README.md
+++ b/jvm/README.md
@@ -176,4 +176,4 @@ Server server = new Server(proxyHost, proxyPort, "key");
 server.start();
 ```
 
-You can also use `StandaloneServerProcessor` and `ConnectProxyServerProcessor` to build your own RPC server. Refer to [Android RPC Server](https://github.com/apache/incubator-tvm/blob/master/apps/android_rpc/app/src/main/java/org/apache/tvm/tvmrpc/RPCProcessor.java) for more details.
+You can also use `StandaloneServerProcessor` and `ConnectProxyServerProcessor` to build your own RPC server. Refer to [Android RPC Server](https://github.com/apache/incubator-tvm/blob/main/apps/android_rpc/app/src/main/java/org/apache/tvm/tvmrpc/RPCProcessor.java) for more details.
diff --git a/jvm/pom.xml b/jvm/pom.xml
index b563dd1..886f0e6 100644
--- a/jvm/pom.xml
+++ b/jvm/pom.xml
@@ -7,7 +7,7 @@
   <artifactId>tvm4j-parent</artifactId>
   <version>0.0.1-SNAPSHOT</version>
   <name>TVM4J Package - Parent</name>
-  <url>https://github.com/apache/incubator-tvm/tree/master/jvm</url>
+  <url>https://github.com/apache/incubator-tvm/tree/main/jvm</url>
   <description>TVM4J Package</description>
   <organization>
     <name>Apache Software Foundation</name>
diff --git a/python/tvm/contrib/hexagon.py b/python/tvm/contrib/hexagon.py
index ed938ba..34b3753 100644
--- a/python/tvm/contrib/hexagon.py
+++ b/python/tvm/contrib/hexagon.py
@@ -40,7 +40,7 @@ from .._ffi.registry import register_func
 # Subsequent calls to 'link_shared' will use the newly registered linker.
 
 hexagon_toolchain_root = os.environ.get("HEXAGON_TOOLCHAIN") or ""  # pylint: disable=invalid-name
-hexagon_link_master = os.path.join(  # pylint: disable=invalid-name
+hexagon_link_main = os.path.join(  # pylint: disable=invalid-name
     hexagon_toolchain_root, "bin", "hexagon-link"
 )
 
@@ -53,7 +53,7 @@ def register_linker(f):
 @register_func("tvm.contrib.hexagon.hexagon_link")
 def hexagon_link():
     """Return path to the Hexagon linker."""
-    return hexagon_link_master
+    return hexagon_link_main
 
 
 @register_func("tvm.contrib.hexagon.link_shared")
diff --git a/python/tvm/relay/testing/dcgan.py b/python/tvm/relay/testing/dcgan.py
index 04429ae..fc531b7 100644
--- a/python/tvm/relay/testing/dcgan.py
+++ b/python/tvm/relay/testing/dcgan.py
@@ -19,7 +19,7 @@
 Net of the generator of DCGAN
 
 Adapted from:
-https://github.com/tqchen/mxnet-gan/blob/master/mxgan/generator.py
+https://github.com/tqchen/mxnet-gan/blob/main/mxgan/generator.py
 
 Reference:
 Radford, Alec, Luke Metz, and Soumith Chintala.
diff --git a/python/tvm/relay/testing/tf.py b/python/tvm/relay/testing/tf.py
index 38bb30f..b0b1577 100644
--- a/python/tvm/relay/testing/tf.py
+++ b/python/tvm/relay/testing/tf.py
@@ -223,7 +223,7 @@ def get_workload(model_path, model_sub_path=None, inputs_dict=None, output=None)
     if model_sub_path:
         path_model = get_workload_official(model_path, model_sub_path)
     else:
-        repo_base = "https://github.com/dmlc/web-data/raw/master/tensorflow/models/"
+        repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/"
         model_url = os.path.join(repo_base, model_path)
         path_model = download_testdata(model_url, model_path, module="tf")
 
diff --git a/python/tvm/rpc/server.py b/python/tvm/rpc/server.py
index b25ed46..7287234 100644
--- a/python/tvm/rpc/server.py
+++ b/python/tvm/rpc/server.py
@@ -131,7 +131,7 @@ def _parse_server_opt(opts):
 
 
 def _listen_loop(sock, port, rpc_key, tracker_addr, load_library, custom_addr):
-    """Listening loop of the server master."""
+    """Listening loop of the server."""
 
     def _accept_conn(listen_sock, tracker_conn, ping_period=2):
         """Accept connection from the other places.
diff --git a/rust/tvm/README.md b/rust/tvm/README.md
index 01e088f..13aef89 100644
--- a/rust/tvm/README.md
+++ b/rust/tvm/README.md
@@ -54,7 +54,7 @@ with open(os.path.join(target_dir,"deploy_param.params"), "wb") as fo:
 
 Now, we need to load these artifacts to create and run the *Graph Runtime*, which classifies our input cat image
 
-![cat](https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true)
+![cat](https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true)
 
 as demonstrated in the following Rust snippet
 
diff --git a/rust/tvm/examples/resnet/src/build_resnet.py b/rust/tvm/examples/resnet/src/build_resnet.py
index 324bb52..bc100fe 100644
--- a/rust/tvm/examples/resnet/src/build_resnet.py
+++ b/rust/tvm/examples/resnet/src/build_resnet.py
@@ -126,7 +126,7 @@ def transform_image(image):
 
 
 def get_cat_image():
-    img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+    img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
     img_path = download_testdata(img_url, "cat.png", module="data")
     shutil.copyfile(img_path, "cat.png")
     img = Image.open(img_path).resize((224, 224))
diff --git a/src/relay/backend/compile_engine.cc b/src/relay/backend/compile_engine.cc
index 3f7af37..b679fea 100644
--- a/src/relay/backend/compile_engine.cc
+++ b/src/relay/backend/compile_engine.cc
@@ -135,7 +135,7 @@ class ScheduleGetter : public backend::MemoizedExprTranslator<Array<te::Tensor>>
       candidate_name = truncated_name.str();
     }
     cache_node->func_name = candidate_name;
-    CHECK(master_op_.defined());
+    CHECK(anchor_op_.defined());
     // Fusion over tupled results may leave identity relationships
     // between inputs and outputs, and those should not be scheduled.
     // Hence schedule only non PlaceholderOp outputs.
@@ -147,9 +147,9 @@ class ScheduleGetter : public backend::MemoizedExprTranslator<Array<te::Tensor>>
     }
     te::Schedule schedule;
     // No need to register schedule for device copy op.
-    if (master_attrs_.as<DeviceCopyAttrs>() == nullptr) {
-      CHECK(master_implementation_.defined());
-      schedule = master_implementation_.Schedule(master_attrs_, tensor_outs, target_);
+    if (anchor_attrs_.as<DeviceCopyAttrs>() == nullptr) {
+      CHECK(anchor_implementation_.defined());
+      schedule = anchor_implementation_.Schedule(anchor_attrs_, tensor_outs, target_);
       for (const auto& scalar : scalars_) {
         if (schedule->Contain(scalar)) {
           schedule[scalar].compute_inline();
@@ -229,15 +229,15 @@ class ScheduleGetter : public backend::MemoizedExprTranslator<Array<te::Tensor>>
 
     int op_pattern = fpattern[op];
     if (op_pattern >= kCommReduce) {
-      CHECK(!master_op_.defined() || master_op_pattern_ < kCommReduce)
+      CHECK(!anchor_op_.defined() || anchor_op_pattern_ < kCommReduce)
           << "Two complicated op in a primitive function "
-          << " master=" << master_op_ << " current=" << op;
+          << " anchor=" << anchor_op_ << " current=" << op;
     }
-    if (op_pattern >= master_op_pattern_) {
-      master_op_ = op;
-      master_attrs_ = call_node->attrs;
-      master_op_pattern_ = op_pattern;
-      master_implementation_ = impl;
+    if (op_pattern >= anchor_op_pattern_) {
+      anchor_op_ = op;
+      anchor_attrs_ = call_node->attrs;
+      anchor_op_pattern_ = op_pattern;
+      anchor_implementation_ = impl;
     }
     if (outputs.size() != 1) {
       const auto* tuple_type = call_node->checked_type().as<TupleTypeNode>();
@@ -289,10 +289,10 @@ class ScheduleGetter : public backend::MemoizedExprTranslator<Array<te::Tensor>>
 
  private:
   tvm::Target target_;
-  Op master_op_;
-  Attrs master_attrs_;
-  int master_op_pattern_{0};
-  OpImplementation master_implementation_;
+  Op anchor_op_;
+  Attrs anchor_attrs_;
+  int anchor_op_pattern_{0};
+  OpImplementation anchor_implementation_;
   std::ostringstream readable_name_stream_;
   Array<te::Operation> scalars_;
   // Cache device copy op for equivalence checking to reduce registry lookup
diff --git a/src/relay/backend/vm/compiler.cc b/src/relay/backend/vm/compiler.cc
index 99ffea4..fb9ca08 100644
--- a/src/relay/backend/vm/compiler.cc
+++ b/src/relay/backend/vm/compiler.cc
@@ -1150,7 +1150,7 @@ void VMCompiler::Codegen() {
     }
     exec_->lib = tvm::build(build_funcs, target_host_);
   } else {
-    // There is no function handled by TVM. We create a virtual master module
+    // There is no function handled by TVM. We create a virtual main module
     // to make sure a DSO module will be also available.
     exec_->lib = codegen::CSourceModuleCreate(";", "");
   }
diff --git a/src/relay/transforms/fuse_ops.cc b/src/relay/transforms/fuse_ops.cc
index 85b74cc..10fa054 100644
--- a/src/relay/transforms/fuse_ops.cc
+++ b/src/relay/transforms/fuse_ops.cc
@@ -511,10 +511,10 @@ class GraphPartitioner {
     /*! \brief reference to the root node. */
     const tvm::Object* root_ref{nullptr};
     /*!
-     * \brief Reference to the master node,
+     * \brief Reference to the anchor node,
      * this field is not nullptr only if pattern is kOutEWiseFusable.
      */
-    const tvm::Object* master_ref{nullptr};
+    const tvm::Object* anchor_ref{nullptr};
     /*!
      * \brief Find the group root, perform path compression
      * \return The root type node.
@@ -614,10 +614,10 @@ class GraphPartitioner {
     // update the number of nodes of the parent group
     parent->num_nodes += child->num_nodes;
     child->parent = parent;
-    // update master ref and pattern
-    if (child->master_ref != nullptr) {
-      CHECK(parent->master_ref == nullptr);
-      parent->master_ref = child->master_ref;
+    // update anchor ref and pattern
+    if (child->anchor_ref != nullptr) {
+      CHECK(parent->anchor_ref == nullptr);
+      parent->anchor_ref = child->anchor_ref;
       parent->pattern = CombinePattern(child->pattern, parent->pattern);
     }
   }
@@ -681,9 +681,9 @@ class GraphPartitioner {
       auto* group_node = arena_->make<Group>();
       group_node->pattern = graph_node->pattern;
       group_node->root_ref = graph_node->ref;
-      // set master ref if necessary.
+      // set anchor ref if necessary.
       if (group_node->pattern == kOutEWiseFusable) {
-        group_node->master_ref = graph_node->ref;
+        group_node->anchor_ref = graph_node->ref;
       }
       groups_[nid] = group_node;
     }
@@ -756,7 +756,7 @@ class GraphPartitioner {
           auto fcond = [](OpPatternKind kind, bool is_sink) {
             if (!is_sink) {
               // Elemwise, broadcast, and injective ops on the parallel branches
-              // are allowed be fused to the elemwise/broadcast master.
+              // are allowed to be fused to the elemwise/broadcast anchor.
               return kind <= kInjective;
             } else {
               return (kind <= kBroadcast || kind == kCommReduce || kind == kInjective ||
diff --git a/src/runtime/thread_pool.cc b/src/runtime/thread_pool.cc
index 0cc881c..bf41334 100644
--- a/src/runtime/thread_pool.cc
+++ b/src/runtime/thread_pool.cc
@@ -64,7 +64,7 @@ uint32_t GetSpinCount() {
 constexpr int kSyncStride = 64 / sizeof(std::atomic<int>);
 
 /*!
- * \brief Thread local master environment.
+ * \brief Thread local main environment.
  */
 class ParallelLauncher {
  public:
@@ -293,12 +293,12 @@ class ThreadPool {
     launcher->Init(flambda, cdata, num_task, need_sync != 0);
     SpscTaskQueue::Task tsk;
     tsk.launcher = launcher;
-    // if worker0 is taken by the master, queues_[0] is abandoned
+    // if worker0 is taken by the main thread, queues_[0] is abandoned
     for (int i = exclude_worker0_; i < num_task; ++i) {
       tsk.task_id = i;
       queues_[i]->Push(tsk);
     }
-    // use the master thread to run task 0
+    // use the main thread to run task 0
     if (exclude_worker0_) {
       TVMParallelGroupEnv* penv = &(tsk.launcher->env);
       if ((*tsk.launcher->flambda)(0, penv, cdata) == 0) {
@@ -346,7 +346,7 @@ class ThreadPool {
   int num_workers_;
   // number of workers used (can be restricted with affinity pref)
   int num_workers_used_;
-  // if or not to exclude worker 0 and use master to run task 0
+  // whether or not to exclude worker 0 and use the main thread to run task 0
   bool exclude_worker0_{true};
   std::vector<std::unique_ptr<SpscTaskQueue> > queues_;
   std::unique_ptr<tvm::runtime::threading::ThreadGroup> threads_;
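From Python, the pool can be steered without touching this C++ (a hedged sketch: the `TVM_NUM_THREADS` environment variable and the `runtime.config_threadpool` global function exist in the runtime, but the mode and thread-count values below are illustrative):

```python
import os
os.environ["TVM_NUM_THREADS"] = "4"  # cap workers; set before the pool starts

import tvm

config_threadpool = tvm.get_global_func("runtime.config_threadpool")
config_threadpool(1, 4)  # (affinity mode, number of threads)
```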
diff --git a/src/runtime/threading_backend.cc b/src/runtime/threading_backend.cc
index 80564a2..019df3e 100644
--- a/src/runtime/threading_backend.cc
+++ b/src/runtime/threading_backend.cc
@@ -95,8 +95,8 @@ class ThreadGroup::Impl {
 
  private:
   // bind worker threads to disjoint cores
-  // if worker 0 is offloaded to master, i.e. exclude_worker0 is true,
-  // the master thread is bound to core 0.
+  // if worker 0 is offloaded to the main thread, i.e. exclude_worker0 is true,
+  // the main thread is bound to core 0.
   void SetAffinity(bool exclude_worker0, bool reverse = false) {
 #if defined(__ANDROID__)
 #ifndef CPU_SET
@@ -130,9 +130,9 @@ class ThreadGroup::Impl {
       pthread_setaffinity_np(threads_[i].native_handle(), sizeof(cpu_set_t), &cpuset);
 #endif
     }
-    if (exclude_worker0) {  // master thread run task
+    if (exclude_worker0) {  // main thread runs task 0
       // Master thread will have free migration on needed cores.
-      // Typically, the OS will schedule the master thread to run at core 0,
+      // Typically, the OS will schedule the main thread to run at core 0,
       // which is idle, when other workers are running.
       // See the comment inside SetMasterThreadFullCpuAffinity function to get more detail.
       SetMasterThreadFullCpuAffinity(reverse);
@@ -148,11 +148,11 @@ class ThreadGroup::Impl {
     // And we use the config_threadpool API to specify that we will only use the 4xA53.
     // The sorted_order will be [4, 5, 0, 1, 2, 3].
     // By the time this API is called, we have spawned threads on the little cores for the other workers
-    // in SetAffinity function. And for tvm master thread, it should also run on little cores,
+    // in the SetAffinity function. The TVM main thread should also run on the little cores,
     // not big cores (4, 5).
 
     // Note: this works well on x86 too. Because x86 doesn't have BIG.LITTLE,
-    // our implementation will use kBig mode by default and will let master thread
+    // our implementation will use kBig mode by default and will let the main thread
     // run on the intended cores.
     if (reverse) {
       for (int i = 0; i < little_count_; ++i) {
diff --git a/tests/lint/clang_format.sh b/tests/lint/clang_format.sh
index de6711b..ad6a35b 100755
--- a/tests/lint/clang_format.sh
+++ b/tests/lint/clang_format.sh
@@ -17,7 +17,7 @@
 # under the License.
 
 
-# check lastest change, for squash merge into master
+# check latest change, for squash merge into main
 ./tests/lint/git-clang-format.sh HEAD~1
-# chekc against origin/master for PRs.
-./tests/lint/git-clang-format.sh origin/master
+# check against origin/main for PRs.
+./tests/lint/git-clang-format.sh origin/main
diff --git a/tests/lint/git-black.sh b/tests/lint/git-black.sh
index 835c30d..993a2b2 100755
--- a/tests/lint/git-black.sh
+++ b/tests/lint/git-black.sh
@@ -32,7 +32,7 @@ if [[ "$#" -lt 1 ]]; then
     echo "Run black on Python files that changed since <commit>"
     echo "Examples:"
     echo "- Compare last one commit: tests/lint/git-black.sh HEAD~1"
-    echo "- Compare against upstream/master: tests/lint/git-black.sh upstream/master"
+    echo "- Compare against upstream/main: tests/lint/git-black.sh upstream/main"
     echo "The -i will use black to format files in-place instead of checking them."
     exit 1
 fi
diff --git a/tests/lint/git-clang-format.sh b/tests/lint/git-clang-format.sh
index 90f1835..37d0532 100755
--- a/tests/lint/git-clang-format.sh
+++ b/tests/lint/git-clang-format.sh
@@ -32,7 +32,7 @@ if [[ "$#" -lt 1 ]]; then
     echo "Run clang-format on files that changed since <commit>"
     echo "Examples:"
     echo "- Compare last one commit: tests/lint/git-clang-format.sh HEAD~1"
-    echo "- Compare against upstream/master: tests/lint/git-clang-format.sh upstream/master"
+    echo "- Compare against upstream/main: tests/lint/git-clang-format.sh upstream/main"
     echo "You can also add -i option to do inplace format"
     exit 1
 fi
diff --git a/tests/lint/python_format.sh b/tests/lint/python_format.sh
index 752abfd..2e907f8 100755
--- a/tests/lint/python_format.sh
+++ b/tests/lint/python_format.sh
@@ -18,4 +18,4 @@
 
 
 ./tests/lint/git-black.sh HEAD~1
-./tests/lint/git-black.sh origin/master
+./tests/lint/git-black.sh origin/main
diff --git a/tests/python/contrib/test_ethosn/infrastructure.py b/tests/python/contrib/test_ethosn/infrastructure.py
index e2c4055..8f07a37 100644
--- a/tests/python/contrib/test_ethosn/infrastructure.py
+++ b/tests/python/contrib/test_ethosn/infrastructure.py
@@ -32,7 +32,7 @@ from tvm.relay.op.contrib import get_pattern_table
 
 
 def get_real_image(im_height, im_width):
-    repo_base = "https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/"
+    repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/InceptionV1/"
     img_name = "elephant-299.jpg"
     image_url = os.path.join(repo_base, img_name)
     img_path = download.download_testdata(image_url, img_name, module="data")
diff --git a/tests/python/driver/tvmc/conftest.py b/tests/python/driver/tvmc/conftest.py
index 21ebb0f..62af34e 100644
--- a/tests/python/driver/tvmc/conftest.py
+++ b/tests/python/driver/tvmc/conftest.py
@@ -138,7 +138,7 @@ def imagenet_cat(tmpdir_factory):
     tmpdir_name = tmpdir_factory.mktemp("data")
     cat_file_name = "imagenet_cat.npz"
 
-    cat_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+    cat_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
     image_path = download_testdata(cat_url, "inputs", module=["tvmc"])
     resized_image = Image.open(image_path).resize((224, 224))
     image_data = np.asarray(resized_image).astype("float32")
diff --git a/tests/python/frontend/darknet/test_forward.py b/tests/python/frontend/darknet/test_forward.py
index 9e21a86..77c72e7 100644
--- a/tests/python/frontend/darknet/test_forward.py
+++ b/tests/python/frontend/darknet/test_forward.py
@@ -33,7 +33,7 @@ from tvm.relay.testing.darknet import __darknetffi__
 from tvm.relay.frontend.darknet import ACTIVATION
 from tvm import relay
 
-REPO_URL = "https://github.com/dmlc/web-data/blob/master/darknet/"
+REPO_URL = "https://github.com/dmlc/web-data/blob/main/darknet/"
 DARKNET_LIB = "libdarknet2.0.so"
 DARKNETLIB_URL = REPO_URL + "lib/" + DARKNET_LIB + "?raw=true"
 LIB = __darknetffi__.dlopen(download_testdata(DARKNETLIB_URL, DARKNET_LIB, module="darknet"))
diff --git a/tests/python/frontend/mxnet/model_zoo/dcgan.py b/tests/python/frontend/mxnet/model_zoo/dcgan.py
index cf086bc..67c20cc 100644
--- a/tests/python/frontend/mxnet/model_zoo/dcgan.py
+++ b/tests/python/frontend/mxnet/model_zoo/dcgan.py
@@ -19,7 +19,7 @@
 The MXNet symbol of DCGAN generator
 
 Adapted from:
-https://github.com/tqchen/mxnet-gan/blob/master/mxgan/generator.py
+https://github.com/tqchen/mxnet-gan/blob/main/mxgan/generator.py
 
 Reference:
 Radford, Alec, Luke Metz, and Soumith Chintala.
diff --git a/tests/python/frontend/pytorch/qnn_test.py b/tests/python/frontend/pytorch/qnn_test.py
index ac24a1f..706f15b 100644
--- a/tests/python/frontend/pytorch/qnn_test.py
+++ b/tests/python/frontend/pytorch/qnn_test.py
@@ -346,7 +346,7 @@ def test_quantized_imagenet():
         )
 
     def get_real_image(im_height, im_width):
-        repo_base = "https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/"
+        repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/InceptionV1/"
         img_name = "elephant-299.jpg"
         image_url = os.path.join(repo_base, img_name)
         img_path = download_testdata(image_url, img_name, module="data")
diff --git a/tests/python/frontend/tflite/test_forward.py b/tests/python/frontend/tflite/test_forward.py
index 7d67427..2798004 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -72,7 +72,7 @@ def convert_to_list(x):
 # Get a real image for e2e testing
 # --------------------------------
 def get_real_image(im_height, im_width):
-    repo_base = "https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/"
+    repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/InceptionV1/"
     img_name = "elephant-299.jpg"
     image_url = os.path.join(repo_base, img_name)
     img_path = download_testdata(image_url, img_name, module="data")
@@ -83,7 +83,7 @@ def get_real_image(im_height, im_width):
 
 
 def pre_processed_image(height, width):
-    repo_base = "https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/"
+    repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/InceptionV1/"
     img_name = "elephant-299.jpg"
     image_url = os.path.join(repo_base, img_name)
     img_path = download_testdata(image_url, img_name, module="data")
@@ -100,7 +100,7 @@ def pre_processed_image(height, width):
 
 
 def get_real_image_object_detection(im_height, im_width):
-    repo_base = "https://github.com/dmlc/web-data/raw/master/gluoncv/detection/"
+    repo_base = "https://github.com/dmlc/web-data/raw/main/gluoncv/detection/"
     img_name = "street_small.jpg"
     image_url = os.path.join(repo_base, img_name)
     img_path = download_testdata(image_url, img_name, module="data")
@@ -3649,7 +3649,7 @@ def test_forward_tflite2_qnn_resnet50():
     """Test the Quantized TFLite version 2.1.0 Resnet50 model."""
     if package_version.parse(tf.VERSION) >= package_version.parse("2.1.0"):
         tflite_model_file = download_testdata(
-            "https://raw.githubusercontent.com/dmlc/web-data/master/tensorflow/models/Quantized/resnet_50_quantized.tflite",
+            "https://raw.githubusercontent.com/dmlc/web-data/main/tensorflow/models/Quantized/resnet_50_quantized.tflite",
             "resnet_50_quantized.tflite",
         )
         with open(tflite_model_file, "rb") as f:
@@ -3670,7 +3670,7 @@ def test_forward_tflite2_qnn_inception_v1():
     """Test the Quantized TFLite version 2.1.0 Inception V1 model."""
     if package_version.parse(tf.VERSION) >= package_version.parse("2.1.0"):
         tflite_model_file = download_testdata(
-            "https://raw.githubusercontent.com/dmlc/web-data/master/tensorflow/models/Quantized/inception_v1_quantized.tflite",
+            "https://raw.githubusercontent.com/dmlc/web-data/main/tensorflow/models/Quantized/inception_v1_quantized.tflite",
             "inception_v1_quantized.tflite",
         )
         with open(tflite_model_file, "rb") as f:
@@ -3691,7 +3691,7 @@ def test_forward_tflite2_qnn_mobilenet_v2():
     """Test the Quantized TFLite version 2.1.0 Mobilenet V2 model."""
     if package_version.parse(tf.VERSION) >= package_version.parse("2.1.0"):
         tflite_model_file = download_testdata(
-            "https://raw.githubusercontent.com/dmlc/web-data/master/tensorflow/models/Quantized/mobilenet_v2_quantized.tflite",
+            "https://raw.githubusercontent.com/dmlc/web-data/main/tensorflow/models/Quantized/mobilenet_v2_quantized.tflite",
             "mobilenet_v2_quantized.tflite",
         )
         with open(tflite_model_file, "rb") as f:
@@ -3788,7 +3788,7 @@ def test_forward_qnn_coco_ssd_mobilenet_v1():
 def test_forward_coco_ssd_mobilenet_v1():
     """Test the FP32 Coco SSD Mobilenet V1 TF Lite model."""
     tflite_model_file = tf_testing.get_workload_official(
-        "https://raw.githubusercontent.com/dmlc/web-data/master/tensorflow/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tgz",
+        "https://raw.githubusercontent.com/dmlc/web-data/main/tensorflow/models/object_detection/ssd_mobilenet_v1_coco_2018_01_28.tgz",
         "ssd_mobilenet_v1_coco_2018_01_28.tflite",
     )
 
diff --git a/tutorials/autotvm/tune_relay_arm.py b/tutorials/autotvm/tune_relay_arm.py
index a336870..f024ba4 100644
--- a/tutorials/autotvm/tune_relay_arm.py
+++ b/tutorials/autotvm/tune_relay_arm.py
@@ -127,7 +127,7 @@ def get_network(name, batch_size):
 # measure the speed of code on the board.
 #
 # To scale up the tuning, TVM uses RPC Tracker to manage distributed devices.
-# The RPC Tracker is a centralized master node. We can register all devices to
+# The RPC Tracker is a centralized controller node. We can register all devices to
 # the tracker. For example, if we have 10 phones, we can register all of them
 # to the tracker, and run 10 measurements in parallel, accelerating the tuning process.
 #
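Once devices are registered, a client leases one through the tracker; a minimal sketch (the host, port, and device key "android" are placeholders):

```python
from tvm import rpc

tracker = rpc.connect_tracker("0.0.0.0", 9190)
remote = tracker.request("android", priority=1, session_timeout=60)
ctx = remote.cpu(0)  # remote context used for timing measurements
```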
@@ -162,7 +162,7 @@ def get_network(name, batch_size):
 #   (replace :code:`[HOST_IP]` with the IP address of your host machine)
 #
 # * For Android:
-#   Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to
+#   Follow this `readme page <https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc>`_ to
 #   install the TVM RPC APK on the android device. Make sure you can pass the android rpc test.
 #   Then you have already registered your device. During tuning, you have to go to developer option
 #   and enable "Keep screen awake during changing" and charge your phone to make it stable.
diff --git a/tutorials/autotvm/tune_relay_cuda.py b/tutorials/autotvm/tune_relay_cuda.py
index 32ee266..4636103 100644
--- a/tutorials/autotvm/tune_relay_cuda.py
+++ b/tutorials/autotvm/tune_relay_cuda.py
@@ -322,7 +322,7 @@ def tune_and_evaluate(tuning_opt):
 #
 # If you have multiple devices, you can use all of them for measurement.
 # TVM uses the RPC Tracker to manage distributed devices.
-# The RPC Tracker is a centralized master node. We can register all devices to
+# The RPC Tracker is a centralized controller node. We can register all devices to
 # the tracker. For example, if we have 10 GPU cards, we can register all of them
 # to the tracker, and run 10 measurements in parallel, accelerating the tuning process.
 #
diff --git a/tutorials/autotvm/tune_relay_mobile_gpu.py b/tutorials/autotvm/tune_relay_mobile_gpu.py
index 19fa601..6125466 100644
--- a/tutorials/autotvm/tune_relay_mobile_gpu.py
+++ b/tutorials/autotvm/tune_relay_mobile_gpu.py
@@ -126,7 +126,7 @@ def get_network(name, batch_size):
 # measure the speed of code on the board.
 #
 # To scale up the tuning, TVM uses RPC Tracker to manage distributed devices.
-# The RPC Tracker is a centralized master node. We can register all devices to
+# The RPC Tracker is a centralized controller node. We can register all devices to
 # the tracker. For example, if we have 10 phones, we can register all of them
 # to the tracker, and run 10 measurements in parallel, accelerating the tuning process.
 #
@@ -161,7 +161,7 @@ def get_network(name, batch_size):
 #   (replace :code:`[HOST_IP]` with the IP address of your host machine)
 #
 # * For Android:
-#   Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to
+#   Follow this `readme page <https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc>`_ to
 #   install TVM RPC APK on the android device. Make sure you can pass the android RPC test.
 #   Then you have already registered your device. During tuning, you have to go to developer options
 #   and enable "Keep screen awake during changing" and charge your phone to keep it stable.
diff --git a/tutorials/dev/bring_your_own_datatypes.py b/tutorials/dev/bring_your_own_datatypes.py
index 07592e7..c85ec07 100644
--- a/tutorials/dev/bring_your_own_datatypes.py
+++ b/tutorials/dev/bring_your_own_datatypes.py
@@ -116,7 +116,7 @@ tvm.target.datatype.register("myfloat", 150)
 
 ######################################################################
 # Note that the type code, 150, is currently chosen manually by the user.
-# See ``TVMTypeCode::kCustomBegin`` in `include/tvm/runtime/c_runtime_api.h <https://github.com/apache/incubator-tvm/blob/master/include/tvm/runtime/data_type.h>`_.
+# See ``TVMTypeCode::kCustomBegin`` in `include/tvm/runtime/c_runtime_api.h <https://github.com/apache/incubator-tvm/blob/main/include/tvm/runtime/data_type.h>`_.
 # Now we can generate our program again:
 
 x_myfloat = relay.cast(x, dtype="custom[myfloat]32")
@@ -176,7 +176,7 @@ tvm.target.datatype.register_op(
 # To provide for the general case, we have made a helper function, ``create_lower_func(...)``,
 # which does just this: given a dictionary, it replaces the given operation with a ``Call`` to the appropriate function name provided based on the op and the bit widths.
 # It additionally removes usages of the custom datatype by storing the custom datatype in an opaque ``uint`` of the appropriate width; in our case, a ``uint32_t``.
-# For more information, see `the source code <https://github.com/apache/incubator-tvm/blob/master/python/tvm/target/datatype.py>`_.
+# For more information, see `the source code <https://github.com/apache/incubator-tvm/blob/main/python/tvm/target/datatype.py>`_.
 
 # We can now re-try running the program:
 try:
diff --git a/tutorials/frontend/deploy_model_on_android.py b/tutorials/frontend/deploy_model_on_android.py
index 3bf55d9..851810a 100644
--- a/tutorials/frontend/deploy_model_on_android.py
+++ b/tutorials/frontend/deploy_model_on_android.py
@@ -106,7 +106,7 @@ from tvm.contrib.download import download_testdata
 # --------------------------------------
 # Now we can register our Android device to the tracker.
 #
-# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc>`_ to
+# Follow this `readme page <https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc>`_ to
 # install TVM RPC APK on the android device.
 #
 # Here is an example of config.mk. I enabled OpenCL and Vulkan.
@@ -139,7 +139,7 @@ from tvm.contrib.download import download_testdata
 #
 # .. note::
 #
-#   At this time, don't forget to `create a standalone toolchain <https://github.com/apache/incubator-tvm/tree/master/apps/android_rpc#architecture-and-android-standalone-toolchain>`_ .
+#   At this time, don't forget to `create a standalone toolchain <https://github.com/apache/incubator-tvm/tree/main/apps/android_rpc#architecture-and-android-standalone-toolchain>`_ .
 #
 #   for example
 #
@@ -206,7 +206,7 @@ keras_mobilenet_v2.load_weights(weights_path)
 ######################################################################
 # In order to test our model, here we download an image of a cat and
 # transform its format.
-img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 img_name = "cat.png"
 img_path = download_testdata(img_url, img_name, module="data")
 image = Image.open(img_path).resize((224, 224))
diff --git a/tutorials/frontend/deploy_model_on_rasp.py b/tutorials/frontend/deploy_model_on_rasp.py
index c6e2d8f..8b49a21 100644
--- a/tutorials/frontend/deploy_model_on_rasp.py
+++ b/tutorials/frontend/deploy_model_on_rasp.py
@@ -109,7 +109,7 @@ block = get_model("resnet18_v1", pretrained=True)
 ######################################################################
 # In order to test our model, here we download an image of a cat and
 # transform its format.
-img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 img_name = "cat.png"
 img_path = download_testdata(img_url, img_name, module="data")
 image = Image.open(img_path).resize((224, 224))
diff --git a/tutorials/frontend/deploy_prequantized.py b/tutorials/frontend/deploy_prequantized.py
index 81959db..e9f1a4c 100644
--- a/tutorials/frontend/deploy_prequantized.py
+++ b/tutorials/frontend/deploy_prequantized.py
@@ -59,7 +59,7 @@ def get_transform():
 
 
 def get_real_image(im_height, im_width):
-    img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+    img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
     img_path = download_testdata(img_url, "cat.png", module="data")
     return Image.open(img_path).resize((im_height, im_width))
 
diff --git a/tutorials/frontend/deploy_prequantized_tflite.py b/tutorials/frontend/deploy_prequantized_tflite.py
index 52321b1..121ad9d 100644
--- a/tutorials/frontend/deploy_prequantized_tflite.py
+++ b/tutorials/frontend/deploy_prequantized_tflite.py
@@ -101,7 +101,7 @@ extract(model_path)
 def get_real_image(im_height, im_width):
     from PIL import Image
 
-    repo_base = "https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/"
+    repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/InceptionV1/"
     img_name = "elephant-299.jpg"
     image_url = os.path.join(repo_base, img_name)
     img_path = download_testdata(image_url, img_name, module="data")
diff --git a/tutorials/frontend/deploy_ssd_gluoncv.py b/tutorials/frontend/deploy_ssd_gluoncv.py
index d874487..f1f1bbb 100644
--- a/tutorials/frontend/deploy_ssd_gluoncv.py
+++ b/tutorials/frontend/deploy_ssd_gluoncv.py
@@ -73,7 +73,7 @@ dshape = (1, 3, 512, 512)
 # Download and pre-process demo image
 
 im_fname = download_testdata(
-    "https://github.com/dmlc/web-data/blob/master/" + "gluoncv/detection/street_small.jpg?raw=true",
+    "https://github.com/dmlc/web-data/blob/main/" + "gluoncv/detection/street_small.jpg?raw=true",
     "street_small.jpg",
     module="data",
 )
diff --git a/tutorials/frontend/from_caffe2.py b/tutorials/frontend/from_caffe2.py
index 4f6f647..34581c6 100644
--- a/tutorials/frontend/from_caffe2.py
+++ b/tutorials/frontend/from_caffe2.py
@@ -61,7 +61,7 @@ from PIL import Image
 from matplotlib import pyplot as plt
 import numpy as np
 
-img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 img_path = download_testdata(img_url, "cat.png", module="data")
 img = Image.open(img_path).resize((224, 224))
 plt.imshow(img)
diff --git a/tutorials/frontend/from_coreml.py b/tutorials/frontend/from_coreml.py
index 4e3f391..c868a7f 100644
--- a/tutorials/frontend/from_coreml.py
+++ b/tutorials/frontend/from_coreml.py
@@ -57,7 +57,7 @@ mlmodel = cm.models.MLModel(model_path)
 # Load a test image
 # ------------------
 # A single cat dominates the examples!
-img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 img_path = download_testdata(img_url, "cat.png", module="data")
 img = Image.open(img_path).resize((224, 224))
 # Mobilenet.mlmodel's input is BGR format
diff --git a/tutorials/frontend/from_darknet.py b/tutorials/frontend/from_darknet.py
index 4cbafaf..fc77079 100644
--- a/tutorials/frontend/from_darknet.py
+++ b/tutorials/frontend/from_darknet.py
@@ -60,7 +60,7 @@ MODEL_NAME = "yolov3"
 # Download cfg and weights file if first time.
 CFG_NAME = MODEL_NAME + ".cfg"
 WEIGHTS_NAME = MODEL_NAME + ".weights"
-REPO_URL = "https://github.com/dmlc/web-data/blob/master/darknet/"
+REPO_URL = "https://github.com/dmlc/web-data/blob/main/darknet/"
 CFG_URL = REPO_URL + "cfg/" + CFG_NAME + "?raw=true"
 WEIGHTS_URL = "https://pjreddie.com/media/files/" + WEIGHTS_NAME
 
diff --git a/tutorials/frontend/from_keras.py b/tutorials/frontend/from_keras.py
index a68df55..3dcefd5 100644
--- a/tutorials/frontend/from_keras.py
+++ b/tutorials/frontend/from_keras.py
@@ -66,7 +66,7 @@ from PIL import Image
 from matplotlib import pyplot as plt
 from keras.applications.resnet50 import preprocess_input
 
-img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 img_path = download_testdata(img_url, "cat.png", module="data")
 img = Image.open(img_path).resize((224, 224))
 plt.imshow(img)
diff --git a/tutorials/frontend/from_mxnet.py b/tutorials/frontend/from_mxnet.py
index d81b211..3eeef87 100644
--- a/tutorials/frontend/from_mxnet.py
+++ b/tutorials/frontend/from_mxnet.py
@@ -51,7 +51,7 @@ from PIL import Image
 from matplotlib import pyplot as plt
 
 block = get_model("resnet18_v1", pretrained=True)
-img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 img_name = "cat.png"
 synset_url = "".join(
     [
diff --git a/tutorials/frontend/from_onnx.py b/tutorials/frontend/from_onnx.py
index 22c839c..141defe 100644
--- a/tutorials/frontend/from_onnx.py
+++ b/tutorials/frontend/from_onnx.py
@@ -63,7 +63,7 @@ onnx_model = onnx.load(model_path)
 # A single cat dominates the examples!
 from PIL import Image
 
-img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 img_path = download_testdata(img_url, "cat.png", module="data")
 img = Image.open(img_path).resize((224, 224))
 img_ycbcr = img.convert("YCbCr")  # convert to YCbCr
diff --git a/tutorials/frontend/from_pytorch.py b/tutorials/frontend/from_pytorch.py
index 2328651..33a0588 100644
--- a/tutorials/frontend/from_pytorch.py
+++ b/tutorials/frontend/from_pytorch.py
@@ -70,7 +70,7 @@ scripted_model = torch.jit.trace(model, input_data).eval()
 # Classic cat example!
 from PIL import Image
 
-img_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+img_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 img_path = download_testdata(img_url, "cat.png", module="data")
 img = Image.open(img_path).resize((224, 224))
 
diff --git a/tutorials/frontend/from_tensorflow.py b/tutorials/frontend/from_tensorflow.py
index a3e8173..5cdc395 100644
--- a/tutorials/frontend/from_tensorflow.py
+++ b/tutorials/frontend/from_tensorflow.py
@@ -45,7 +45,7 @@ except ImportError:
 import tvm.relay.testing.tf as tf_testing
 
 # Base location for model related files.
-repo_base = "https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/"
+repo_base = "https://github.com/dmlc/web-data/raw/main/tensorflow/models/InceptionV1/"
 
 # Test image
 img_name = "elephant-299.jpg"
diff --git a/tutorials/frontend/from_tflite.py b/tutorials/frontend/from_tflite.py
index ee7da62..a3014f9 100644
--- a/tutorials/frontend/from_tflite.py
+++ b/tutorials/frontend/from_tflite.py
@@ -105,7 +105,7 @@ from PIL import Image
 from matplotlib import pyplot as plt
 import numpy as np
 
-image_url = "https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true"
+image_url = "https://github.com/dmlc/mxnet.js/blob/main/data/cat.png?raw=true"
 image_path = download_testdata(image_url, "cat.png", module="data")
 resized_image = Image.open(image_path).resize((224, 224))
 plt.imshow(resized_image)
diff --git a/tutorials/get_started/relay_quick_start.py b/tutorials/get_started/relay_quick_start.py
index 5c7f933..cece7ab 100644
--- a/tutorials/get_started/relay_quick_start.py
+++ b/tutorials/get_started/relay_quick_start.py
@@ -31,7 +31,7 @@ Notice that you need to build TVM with cuda and llvm enabled.
 # ----------------------------------------------
 # The image below shows the hardware backends currently supported by TVM:
 #
-# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tvm_support_list.png
+# .. image:: https://github.com/dmlc/web-data/raw/main/tvm/tutorial/tvm_support_list.png
 #      :align: center
 #
 # In this tutorial, we'll choose cuda and llvm as target backends.
diff --git a/tutorials/language/tedd.py b/tutorials/language/tedd.py
index e0b8038..34ad43c 100644
--- a/tutorials/language/tedd.py
+++ b/tutorials/language/tedd.py
@@ -81,7 +81,7 @@ tedd.viz_dataflow_graph(s, dot_file_path="/tmp/dfg.dot")
 # tedd.viz_dataflow_graph(s, show_svg = True)
 
 ######################################################################
-# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tedd_dfg.png
+# .. image:: https://github.com/dmlc/web-data/raw/main/tvm/tutorial/tedd_dfg.png
 #      :align: center
 #
 # The first one is a dataflow graph.  Every node represents a stage with name and memory
@@ -105,7 +105,7 @@ tedd.viz_schedule_tree(s, dot_file_path="/tmp/scheduletree2.dot")
 # tedd.viz_schedule_tree(s, show_svg = True)
 
 ######################################################################
-# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tedd_st.png
+# .. image:: https://github.com/dmlc/web-data/raw/main/tvm/tutorial/tedd_st.png
 #      :align: center
 #
 # Now, let us take a close look at the second schedule tree.  Every block under ROOT
@@ -138,7 +138,7 @@ tedd.viz_itervar_relationship_graph(s, dot_file_path="/tmp/itervar.dot")
 # tedd.viz_itervar_relationship_graph(s, show_svg = True)
 
 ######################################################################
-# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tedd_itervar_rel.png
+# .. image:: https://github.com/dmlc/web-data/raw/main/tvm/tutorial/tedd_itervar_rel.png
 #      :align: center
 #
 # The last one is an IterVar Relationship Graph.  Every subgraph represents a
diff --git a/tutorials/optimize/opt_conv_cuda.py b/tutorials/optimize/opt_conv_cuda.py
index f50d302..9cb29b5 100644
--- a/tutorials/optimize/opt_conv_cuda.py
+++ b/tutorials/optimize/opt_conv_cuda.py
@@ -91,7 +91,7 @@ B = te.compute(
 # programmers. Thus how to maximize the data reuse in the shared memory is
 # critical to achieving high performance in GPU kernels.
 #
-# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/gpu_memory_hierarchy.png
+# .. image:: https://github.com/dmlc/web-data/raw/main/tvm/tutorial/gpu_memory_hierarchy.png
 #      :align: center
 #      :height: 319px
 #      :width: 271px
@@ -125,7 +125,7 @@ BL = s.cache_write(B, "local")
 # x block_factor (8 x 64) data from Apad and B each time to buffers in the
 # shared memory.
 #
-# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/conv_gpu_blocking.png
+# .. image:: https://github.com/dmlc/web-data/raw/main/tvm/tutorial/conv_gpu_blocking.png
 #      :align: center
 #      :height: 308px
 #      :width: 317px
@@ -167,7 +167,7 @@ s[B].bind(bx, block_x)
 # parts, and then tile into 8x8 grids. Therefore, shown in the figure below,
 # each thread computes 4 strided grids, where size of each grid is 4 x 4.
 #
-# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/conv_gpu_vthread.png
+# .. image:: https://github.com/dmlc/web-data/raw/main/tvm/tutorial/conv_gpu_vthread.png
 #      :align: center
 #      :height: 188px
 #      :width: 268px
diff --git a/tutorials/optimize/opt_gemm.py b/tutorials/optimize/opt_gemm.py
index ead6660..971269d 100644
--- a/tutorials/optimize/opt_gemm.py
+++ b/tutorials/optimize/opt_gemm.py
@@ -231,7 +231,7 @@ print(tvm.lower(s, [A, B, C], simple_mode=True))
 # array to convert the continuous access pattern on a certain dimension to a sequential pattern after
 # flattening.
 #
-# .. image:: https://github.com/dmlc/web-data/raw/master/tvm/tutorial/array-packing.png
+# .. image:: https://github.com/dmlc/web-data/raw/main/tvm/tutorial/array-packing.png
 #      :align: center
 #
 
diff --git a/vta/tutorials/autotvm/tune_relay_vta.py b/vta/tutorials/autotvm/tune_relay_vta.py
index 41fd04e..cb36040 100644
--- a/vta/tutorials/autotvm/tune_relay_vta.py
+++ b/vta/tutorials/autotvm/tune_relay_vta.py
@@ -119,7 +119,7 @@ def compile_network(env, target, model, start_pack, stop_pack):
 # measure the speed of code on the board.
 #
 # To scale up tuning, TVM uses an RPC Tracker to manage multiple devices.
-# The RPC Tracker is a centralized master node. We can register all devices to
+# The RPC Tracker is a centralized controller node. We can register all devices to
 # the tracker. For example, if we have 10 Pynq boards, we can register all of them
 # to the tracker, and run 10 measurements in parallel, accelerating the tuning process.
 #
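
As a sketch of how the pieces connect (the host address, port, and device key below are illustrative): the tracker and the board-side servers are started from the shell, and a tuning client then requests sessions through the tracker:

    # On the host (shell):   python -m tvm.exec.rpc_tracker --host 0.0.0.0 --port 9190
    # On each board (shell): python -m tvm.exec.rpc_server --tracker <host-ip>:9190 --key pynq
    from tvm import rpc

    tracker = rpc.connect_tracker("0.0.0.0", 9190)
    remote = tracker.request("pynq", priority=1, session_timeout=60)
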
diff --git a/vta/tutorials/frontend/deploy_classification.py b/vta/tutorials/frontend/deploy_classification.py
index 04716ce..582eb03 100644
--- a/vta/tutorials/frontend/deploy_classification.py
+++ b/vta/tutorials/frontend/deploy_classification.py
@@ -220,7 +220,7 @@ with autotvm.tophub.context(target):
 # and an input test image.
 
 # Download ImageNet categories
-categ_url = "https://github.com/uwsaml/web-data/raw/master/vta/models/"
+categ_url = "https://github.com/uwsampl/web-data/raw/main/vta/models/"
 categ_fn = "synset.txt"
 download.download(join(categ_url, categ_fn), categ_fn)
 synset = eval(open(categ_fn).read())
diff --git a/vta/tutorials/frontend/legacy/deploy_detection.py b/vta/tutorials/frontend/legacy/deploy_detection.py
index 010ee31..f2c42c1 100644
--- a/vta/tutorials/frontend/legacy/deploy_detection.py
+++ b/vta/tutorials/frontend/legacy/deploy_detection.py
@@ -71,7 +71,7 @@ assert tvm.runtime.enabled("rpc")
 # Model Name
 # ----------------------------------------------------------------------------
 MODEL_NAME = "yolov3-tiny"
-REPO_URL = "https://github.com/dmlc/web-data/blob/master/darknet/"
+REPO_URL = "https://github.com/dmlc/web-data/blob/main/darknet/"
 
 cfg_path = download_testdata(
     "https://github.com/pjreddie/darknet/blob/master/cfg/" + MODEL_NAME + ".cfg" + "?raw=true",
diff --git a/vta/tutorials/matrix_multiply.py b/vta/tutorials/matrix_multiply.py
index 77fc805..71a8f67 100644
--- a/vta/tutorials/matrix_multiply.py
+++ b/vta/tutorials/matrix_multiply.py
@@ -86,7 +86,7 @@ elif env.TARGET in ["sim", "tsim"]:
 # The last operation is a cast and copy back to DRAM, into the results tensor
 # :code:`C`.
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/gemm_dataflow.png
+# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/gemm_dataflow.png
 #      :align: center
 
 ######################################################################
@@ -107,7 +107,7 @@ elif env.TARGET in ["sim", "tsim"]:
 #   adding the result matrix to an accumulator matrix, as shown in the
 #   figure below.
 #
-#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/tensor_core.png
+#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/tensor_core.png
 #        :align: center
 #        :width: 480px
 #
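
The accumulation pattern above is exactly what gets mapped onto the hardware via tensorization. A hedged sketch, where `s`, `res_gemm`, and the inner axis `x_bi` follow the tutorial's naming:

    import vta

    env = vta.get_env()
    # Replace the innermost tiled matmul-accumulate block with VTA's GEMM intrinsic.
    s[res_gemm].tensorize(x_bi, env.gemm)
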
@@ -126,7 +126,7 @@ elif env.TARGET in ["sim", "tsim"]:
 #   contiguous.
 #   The resulting tiled tensor has a shape of (2, 4, 2, 2).
 #
-#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/data_tiling.png
+#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/data_tiling.png
 #        :align: center
 #        :width: 480px
 #
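
The (2, 4, 2, 2) shape can be reproduced in plain numpy. A small sketch, assuming a (4, 8) source tensor tiled into 2 x 2 blocks:

    import numpy as np

    A = np.arange(4 * 8).reshape(4, 8)
    # Split each axis into (outer, inner) with tile size 2, then group the outers first.
    A_tiled = A.reshape(2, 2, 4, 2).transpose(0, 2, 1, 3)
    assert A_tiled.shape == (2, 4, 2, 2)
    # A_tiled[io, jo] is now the contiguous 2 x 2 tile at block-row io, block-column jo.
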
diff --git a/vta/tutorials/optimize/convolution_opt.py b/vta/tutorials/optimize/convolution_opt.py
index 3f079e8..2888f34 100644
--- a/vta/tutorials/optimize/convolution_opt.py
+++ b/vta/tutorials/optimize/convolution_opt.py
@@ -93,7 +93,7 @@ elif env.TARGET in ["sim", "tsim"]:
 # convolution followed by a rectified linear activation.
 # We describe the TVM dataflow graph of the 2D convolution layer below:
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/conv2d_dataflow.png
+# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/conv2d_dataflow.png
 #      :align: center
 #
 # This computation is intentionally too large to fit onto VTA's on-chip
@@ -120,7 +120,7 @@ elif env.TARGET in ["sim", "tsim"]:
 #   loaded from DRAM into VTA's SRAM, following a 2D strided and padded memory
 #   read.
 #
-#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/padding.png
+#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/padding.png
 #        :align: center
 #        :width: 480px
 
@@ -292,7 +292,7 @@ s[res_conv].reorder(ic_out, b_inn, oc_inn, y_inn, ic_inn, dy, dx, x_inn, b_tns,
 # We show how work is split when computing the 2D convolution in the figure
 # below.
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/virtual_threading.png
+# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/virtual_threading.png
 #      :align: center
 #      :width: 480px
 
diff --git a/vta/tutorials/optimize/matrix_multiply_opt.py b/vta/tutorials/optimize/matrix_multiply_opt.py
index 28600d4..8797c3e 100644
--- a/vta/tutorials/optimize/matrix_multiply_opt.py
+++ b/vta/tutorials/optimize/matrix_multiply_opt.py
@@ -88,7 +88,7 @@ elif env.TARGET in ["sim", "tsim"]:
 # matrix multiplication followed by a rectified linear activation.
 # We describe the TVM dataflow graph of the fully connected layer below:
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/fc_dataflow.png
+# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/fc_dataflow.png
 #      :align: center
 #
 # This computation is intentionally too large to fit onto VTA's on-chip
@@ -183,7 +183,7 @@ print(tvm.lower(s, [data, weight, res], simple_mode=True))
 # We show the outcome of blocking on the computation schedule in the diagram
 # below:
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/blocking.png
+# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/blocking.png
 #      :align: center
 #      :width: 480px
 #
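
The blocking itself is the usual split-and-reorder idiom. A minimal sketch with illustrative names and factors (`s`, `C`, and the factor 8 are assumptions, not the file's exact values):

    # Tile both output axes, then iterate tile-by-tile for locality.
    xo, xi = s[C].split(C.op.axis[0], factor=8)
    yo, yi = s[C].split(C.op.axis[1], factor=8)
    s[C].reorder(xo, yo, xi, yi)
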
diff --git a/vta/tutorials/vta_get_started.py b/vta/tutorials/vta_get_started.py
index 46b050f..8f37b2d 100644
--- a/vta/tutorials/vta_get_started.py
+++ b/vta/tutorials/vta_get_started.py
@@ -115,7 +115,7 @@ elif env.TARGET == "sim":
 # The last operation is a cast and copy back to DRAM, into the results tensor
 # :code:`C`.
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/vadd_dataflow.png
+# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/main/vta/tutorial/vadd_dataflow.png
 #      :align: center
 
 ######################################################################
diff --git a/web/README.md b/web/README.md
index 43540c6..b4d7eb1 100644
--- a/web/README.md
+++ b/web/README.md
@@ -63,11 +63,11 @@ This command will create the tvmjs library that we can use to interface with the
 
 Check code snippet in
 
-- [tests/python/prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/master/web/tests/python/prepare_test_libs.py)
+- [tests/python/prepare_test_libs.py](https://github.com/apache/incubator-tvm/tree/main/web/tests/python/prepare_test_libs.py)
   shows how to create a wasm library that links with the tvm runtime.
   - Note that all wasm libraries have to be created using the `--system-lib` option
   - emcc.create_wasm will automatically link the runtime library `dist/wasm/libtvm_runtime.bc`
-- [tests/web/test_module_load.js](https://github.com/apache/incubator-tvm/tree/master/web/tests/node/test_module_load.js) demonstrate
+- [tests/web/test_module_load.js](https://github.com/apache/incubator-tvm/tree/main/web/tests/node/test_module_load.js) demonstrates
   how to run the generated library through tvmjs API.
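
For orientation, a rough sketch of the flow the linked prepare_test_libs.py implements; the target string and the `emcc.create_wasm` helper name follow the README above but may differ across TVM versions, so treat every identifier here as an assumption to verify against the linked file:

    import tvm
    from tvm import te
    from tvm.contrib import emcc

    n = te.var("n")
    A = te.placeholder((n,), name="A")
    B = te.compute(A.shape, lambda i: A[i] + 1.0, name="B")
    s = te.create_schedule(B.op)
    # --system-lib bundles the generated code into the module so tvmjs can find it.
    fadd = tvm.build(
        s, [A, B], "llvm -mtriple=wasm32-unknown-unknown-wasm --system-lib", name="add_one"
    )
    fadd.export_library("test_addone.wasm", emcc.create_wasm)
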