Posted to commits@tvm.apache.org by tq...@apache.org on 2020/03/30 18:16:56 UTC

[incubator-tvm-site] branch master updated: Points docs to the new location

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/master by this push:
     new 455aeb1  Points docs to the new location
455aeb1 is described below

commit 455aeb1f33e3f7f4c9cf5b18aa082e46f54bdd58
Author: tqchen <ti...@gmail.com>
AuthorDate: Mon Mar 30 11:15:57 2020 -0700

    Points docs to the new location
---
 _posts/2018-07-12-vta-release-announcement.markdown | 2 +-
 _posts/2018-08-10-DLPack-Bridge.md                  | 2 +-
 _posts/2018-10-03-auto-opt-all.md                   | 6 +++---
 _posts/2019-01-19-Golang.md                         | 4 ++--
 _posts/2019-03-18-tvm-apache-announcement.md        | 2 +-
 _posts/2019-04-30-opt-cuda-quantized.md             | 8 ++++----
 community.md                                        | 2 +-
 7 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/_posts/2018-07-12-vta-release-announcement.markdown b/_posts/2018-07-12-vta-release-announcement.markdown
index c65db37..107cf10 100644
--- a/_posts/2018-07-12-vta-release-announcement.markdown
+++ b/_posts/2018-07-12-vta-release-announcement.markdown
@@ -149,5 +149,5 @@ VTA is a research project that came out of the SAML group, which is generously s
 
 ## Get Started!
 - TVM and VTA Github page can be found here: [https://github.com/dmlc/tvm](https://github.com/dmlc/tvm).
-- You can get started with easy to follow [tutorials on programming VTA with TVM](https://docs.tvm.ai/vta/tutorials/index.html).
+- You can get started with easy to follow [tutorials on programming VTA with TVM](https://tvm.apache.org/docs//vta/tutorials/index.html).
 - For more technical details on VTA, read our [VTA technical report](https://arxiv.org/abs/1807.04188) on ArXiv.
\ No newline at end of file
diff --git a/_posts/2018-08-10-DLPack-Bridge.md b/_posts/2018-08-10-DLPack-Bridge.md
index f85e7d9..fb4b2e2 100644
--- a/_posts/2018-08-10-DLPack-Bridge.md
+++ b/_posts/2018-08-10-DLPack-Bridge.md
@@ -95,7 +95,7 @@ schedule:
 For brevity, we do not cover TVM's large collection of scheduling primitives
 that we can use to optimize matrix multiplication. If you wish to make a custom
 GEMM operator run _fast_ on your hardware device, a detailed tutorial can be
-found [here](https://docs.tvm.ai/tutorials/optimize/opt_gemm.html).
+found [here](https://tvm.apache.org/docs//tutorials/optimize/opt_gemm.html).
 
 We then convert the TVM function into one that supports PyTorch tensors:
 ```python
diff --git a/_posts/2018-10-03-auto-opt-all.md b/_posts/2018-10-03-auto-opt-all.md
index b37ce2e..5c13edf 100644
--- a/_posts/2018-10-03-auto-opt-all.md
+++ b/_posts/2018-10-03-auto-opt-all.md
@@ -190,9 +190,9 @@ for inference deployment. TVM just provides such a solution.
 
 ## Links
 [1] benchmark: [https://github.com/dmlc/tvm/tree/master/apps/benchmark](https://github.com/dmlc/tvm/tree/master/apps/benchmark)  
-[2] Tutorial on tuning for ARM CPU: [https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_arm.html](https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_arm.html)  
-[3] Tutorial on tuning for Mobile GPU: [https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html](https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html)  
-[4] Tutorial on tuning for NVIDIA/AMD GPU: [https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html](https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html)  
+[2] Tutorial on tuning for ARM CPU: [https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_arm.html](https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_arm.html)  
+[3] Tutorial on tuning for Mobile GPU: [https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_mobile_gpu.html](https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_mobile_gpu.html)  
+[4] Tutorial on tuning for NVIDIA/AMD GPU: [https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_cuda.html](https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_cuda.html)  
 [5] Paper about AutoTVM: [Learning to Optimize Tensor Program](https://arxiv.org/abs/1805.08166)  
 [6] Paper about Intel CPU (by AWS contributors) :  [Optimizing CNN Model Inference on CPUs](https://arxiv.org/abs/1809.02697)
 
diff --git a/_posts/2019-01-19-Golang.md b/_posts/2019-01-19-Golang.md
index c025763..7825345 100644
--- a/_posts/2019-01-19-Golang.md
+++ b/_posts/2019-01-19-Golang.md
@@ -19,7 +19,7 @@ deploy deep learning models from a variety of frameworks to a choice of hardware
 
 The TVM import and compilation process generates a graph JSON, a module and a params. Any application that
 integrates the TVM runtime can load these compiled modules and perform inference. A detailed tutorial of module
-import and compilation using TVM can be found at [tutorials](https://docs.tvm.ai/tutorials/).
+import and compilation using TVM can be found at [tutorials](https://tvm.apache.org/docs//tutorials/).
 
 TVM now supports deploying compiled modules through Golang. Golang applications can make use of this
 to deploy the deep learning models through TVM. The scope of this blog is the introduction of ```gotvm``` package,
@@ -51,7 +51,7 @@ Developers can make use of TVM to import and compile deep learning models and ge
 {:center}
 <center> Import, Compile, Integrate and Deploy</center> <p></p>
 
-TVM [Compile Deep Learning Models](https://docs.tvm.ai/tutorials/#compile-deep-learning-models) tutorials
+TVM [Compile Deep Learning Models](https://tvm.apache.org/docs//tutorials/#compile-deep-learning-models) tutorials
 are available to compile models from all frameworks supported by the TVM frontend. This compilation process
 generates the artifacts required to integrate and deploy the model on a target.
 
diff --git a/_posts/2019-03-18-tvm-apache-announcement.md b/_posts/2019-03-18-tvm-apache-announcement.md
index b90cd2e..6fe0f60 100644
--- a/_posts/2019-03-18-tvm-apache-announcement.md
+++ b/_posts/2019-03-18-tvm-apache-announcement.md
@@ -21,4 +21,4 @@ Besides the technical innovations, the community adopts an open, welcoming and n
 We would like to take this chance to thank the Allen School for supporting the SAMPL team that gave birth to the TVM project. We would also like to thank the Halide project which provided the basis for TVM’s loop-level IR and initial code generation. We would like to thank our Apache incubator mentors for introducing the project to Apache and providing useful guidance. Finally, we would like to thank the TVM community and all of the organizations, as listed above, that supported the deve [...]
 
 
-See also the [Allen School news about the transition here](https://news.cs.washington.edu/2019/03/18/allen-schools-tvm-deep-learning-compiler-framework-transitions-to-apache/), [TVM conference program slides and recordings](https://sampl.cs.washington.edu/tvmconf/#about-tvmconf), and [our community guideline here](https://docs.tvm.ai/contribute/community.html). Follow us on Twitter: [@ApacheTVM](https://twitter.com/ApacheTVM).
+See also the [Allen School news about the transition here](https://news.cs.washington.edu/2019/03/18/allen-schools-tvm-deep-learning-compiler-framework-transitions-to-apache/), [TVM conference program slides and recordings](https://sampl.cs.washington.edu/tvmconf/#about-tvmconf), and [our community guideline here](https://tvm.apache.org/docs//contribute/community.html). Follow us on Twitter: [@ApacheTVM](https://twitter.com/ApacheTVM).
diff --git a/_posts/2019-04-30-opt-cuda-quantized.md b/_posts/2019-04-30-opt-cuda-quantized.md
index a96f3b8..ecacd6e 100644
--- a/_posts/2019-04-30-opt-cuda-quantized.md
+++ b/_posts/2019-04-30-opt-cuda-quantized.md
@@ -44,7 +44,7 @@ To illustrate, in 2d convolution we accumulate along the channel, the width, and
 This is a typical use case of `dp4a`.
 TVM uses tensorization to support calling external intrinsics.
 We do not need to modify the original computation declaration; we use the schedule primitive `tensorize` to replace the accumulation with `dp4a` tensor intrinsic.
-More details of tensorization can be found in the [tutorial](https://docs.tvm.ai/tutorials/language/tensorize.html).
+More details of tensorization can be found in the [tutorial](https://tvm.apache.org/docs//tutorials/language/tensorize.html).
 
 ## Data Layout Rearrangement
 One of the challenges in tensorization is that we may need to design special computation logic to adapt to the requirement of tensor intrinsics.
@@ -87,7 +87,7 @@ We also do some manual tiling such as splitting axes by 4 or 16 to facilitate ve
 In quantized 2d convolution, we design a search space that includes a set of tunable options, such as the tile size, the axes to fuse, configurations of loop unrolling and double buffering.
 The templates of quantized `conv2d` and `dense` on CUDA are registered under template key `int8`.
 During automatic tuning, we can create tuning tasks for these quantized operators by setting the `template_key` argument.
-Details of how to launch automatic optimization can be found in the [AutoTVM tutorial](https://docs.tvm.ai/tutorials/autotvm/tune_relay_cuda.html).
+Details of how to launch automatic optimization can be found in the [AutoTVM tutorial](https://tvm.apache.org/docs//tutorials/autotvm/tune_relay_cuda.html).
 
 # General Workflow
 
@@ -109,7 +109,7 @@ Next, we use the relay quantization API to convert it to a quantized model.
 net = relay.quantize.quantize(net, params=params)
 ```
 
-Then, we use AutoTVM to extract tuning tasks for the operators in the model and perform automatic optimization. The [AutoTVM tutorial](https://docs.tvm.ai/tutorials/autotvm/tune_relay_cuda.html) provides an example for this.
+Then, we use AutoTVM to extract tuning tasks for the operators in the model and perform automatic optimization. The [AutoTVM tutorial](https://tvm.apache.org/docs//tutorials/autotvm/tune_relay_cuda.html) provides an example for this.
 
 Finally, we build the model and run inference in the quantized mode.
 ```python
@@ -117,7 +117,7 @@ with relay.build_config(opt_level=3):
     graph, lib, params = relay.build(net, target)
 ```
 The result of `relay.build` is a deployable library.
-We can either run inference [on the GPU](https://docs.tvm.ai/tutorials/frontend/from_mxnet.html#execute-the-portable-graph-on-tvm) directly or deploy [on the remote devices](https://docs.tvm.ai/tutorials/frontend/deploy_model_on_rasp.html#deploy-the-model-remotely-by-rpc) via RPC.
+We can either run inference [on the GPU](https://tvm.apache.org/docs//tutorials/frontend/from_mxnet.html#execute-the-portable-graph-on-tvm) directly or deploy [on the remote devices](https://tvm.apache.org/docs//tutorials/frontend/deploy_model_on_rasp.html#deploy-the-model-remotely-by-rpc) via RPC.
 
 # Benchmark
 To verify the performance of the quantized operators in TVM, we benchmark the performance of several popular network models including VGG-19, ResNet-50 and Inception V3.
diff --git a/community.md b/community.md
index 703bbb1..7013cd4 100644
--- a/community.md
+++ b/community.md
@@ -59,7 +59,7 @@ Please reach out are interested working in aspects that are not on the roadmap.
 As a community project, we welcome contributions!
 The package is developed and used by the community.
 
-<a href="https://docs.tvm.ai/contribute" class="link-btn">TVM Contributor Guideline</a>
+<a href="https://tvm.apache.org/docs//contribute" class="link-btn">TVM Contributor Guideline</a>
 
 <br>
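The change above is a mechanical link migration: every `https://docs.tvm.ai/` prefix is rewritten to `https://tvm.apache.org/docs//` (the doubled slash matches the replacement actually used in this commit). A hypothetical helper performing the same rewrite, assuming the paths map one-to-one onto the new host, could be sketched as:

```python
import re

def migrate_docs_links(text: str) -> str:
    """Rewrite docs.tvm.ai URLs to the new tvm.apache.org location.

    Mirrors the replacement seen in the diff, including the doubled
    slash after "docs". Hypothetical helper, not part of the commit.
    """
    return re.sub(r"https://docs\.tvm\.ai/", "https://tvm.apache.org/docs//", text)

line = "found [here](https://docs.tvm.ai/tutorials/optimize/opt_gemm.html)."
print(migrate_docs_links(line))
# -> found [here](https://tvm.apache.org/docs//tutorials/optimize/opt_gemm.html).
```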