Posted to commits@tvm.apache.org by tq...@apache.org on 2020/03/30 18:16:27 UTC

[incubator-tvm-site] branch asf-site updated: Build at Mon Mar 30 11:16:13 PDT 2020

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 575ea41  Build at Mon Mar 30 11:16:13 PDT 2020
575ea41 is described below

commit 575ea41fcd8552e47a90def3092165316b119e1c
Author: tqchen <ti...@gmail.com>
AuthorDate: Mon Mar 30 11:16:14 2020 -0700

    Build at Mon Mar 30 11:16:13 PDT 2020
---
 2018/07/12/vta-release-announcement.html |  2 +-
 2018/08/10/DLPack-Bridge.html            |  2 +-
 2018/10/03/auto-opt-all.html             |  6 +++---
 2019/01/19/Golang.html                   |  4 ++--
 2019/03/18/tvm-apache-announcement.html  |  2 +-
 2019/04/29/opt-cuda-quantized.html       |  8 ++++----
 atom.xml                                 | 26 +++++++++++++-------------
 community.html                           |  2 +-
 rss.xml                                  | 28 ++++++++++++++--------------
 9 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/2018/07/12/vta-release-announcement.html b/2018/07/12/vta-release-announcement.html
index eb259ab..9304549 100644
--- a/2018/07/12/vta-release-announcement.html
+++ b/2018/07/12/vta-release-announcement.html
@@ -289,7 +289,7 @@ This kind of high-level visibility is essential to system designers who want to
 <h2 id="get-started">Get Started!</h2>
 <ul>
   <li>TVM and VTA Github page can be found here: <a href="https://github.com/dmlc/tvm">https://github.com/dmlc/tvm</a>.</li>
-  <li>You can get started with easy to follow <a href="https://docs.tvm.ai/vta/tutorials/index.html">tutorials on programming VTA with TVM</a>.</li>
+  <li>You can get started with easy to follow <a href="https://tvm.apache.org/docs//vta/tutorials/index.html">tutorials on programming VTA with TVM</a>.</li>
   <li>For more technical details on VTA, read our <a href="https://arxiv.org/abs/1807.04188">VTA technical report</a> on ArXiv.</li>
 </ul>
 
diff --git a/2018/08/10/DLPack-Bridge.html b/2018/08/10/DLPack-Bridge.html
index 9383267..9849d29 100644
--- a/2018/08/10/DLPack-Bridge.html
+++ b/2018/08/10/DLPack-Bridge.html
@@ -245,7 +245,7 @@ schedule:</p>
 <p>For brevity, we do not cover TVM’s large collection of scheduling primitives
 that we can use to optimize matrix multiplication. If you wish to make a custom
 GEMM operator run <em>fast</em> on your hardware device, a detailed tutorial can be
-found <a href="https://docs.tvm.ai/tutorials/optimize/opt_gemm.html">here</a>.</p>
+found <a href="https://tvm.apache.org/docs//tutorials/optimize/opt_gemm.html">here</a>.</p>
 
 <p>We then convert the TVM function into one that supports PyTorch tensors:</p>
 <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    <span class="kn">from</span> <span class="nn">tvm.contrib.dlpack</span> <span class="kn">import</span> <span class="n">to_pytorch_func</span>
diff --git a/2018/10/03/auto-opt-all.html b/2018/10/03/auto-opt-all.html
index 6ee0229..87f8122 100644
--- a/2018/10/03/auto-opt-all.html
+++ b/2018/10/03/auto-opt-all.html
@@ -542,9 +542,9 @@ for inference deployment. TVM just provides such a solution.</p>
 
 <h2 id="links">Links</h2>
 <p>[1] benchmark: <a href="https://github.com/dmlc/tvm/tree/master/apps/benchmark">https://github.com/dmlc/tvm/tree/master/apps/benchmark</a><br />
-[2] Tutorial on tuning for ARM CPU: <a href="https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_arm.html">https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_arm.html</a><br />
-[3] Tutorial on tuning for Mobile GPU: <a href="https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html">https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html</a><br />
-[4] Tutorial on tuning for NVIDIA/AMD GPU: <a href="https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html">https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html</a><br />
+[2] Tutorial on tuning for ARM CPU: <a href="https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_arm.html">https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_arm.html</a><br />
+[3] Tutorial on tuning for Mobile GPU: <a href="https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_mobile_gpu.html">https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_mobile_gpu.html</a><br />
+[4] Tutorial on tuning for NVIDIA/AMD GPU: <a href="https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_cuda.html">https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_cuda.html</a><br />
 [5] Paper about AutoTVM: <a href="https://arxiv.org/abs/1805.08166">Learning to Optimize Tensor Program</a><br />
 [6] Paper about Intel CPU (by AWS contributors) :  <a href="https://arxiv.org/abs/1809.02697">Optimizing CNN Model Inference on CPUs</a></p>
 
diff --git a/2019/01/19/Golang.html b/2019/01/19/Golang.html
index e22416a..cd312b9 100644
--- a/2019/01/19/Golang.html
+++ b/2019/01/19/Golang.html
@@ -176,7 +176,7 @@ deploy deep learning models from a variety of frameworks to a choice of hardware
 
 <p>The TVM import and compilation process generates a graph JSON, a module and a params. Any application that
 integrates the TVM runtime can load these compiled modules and perform inference. A detailed tutorial of module
-import and compilation using TVM can be found at <a href="https://docs.tvm.ai/tutorials/">tutorials</a>.</p>
+import and compilation using TVM can be found at <a href="https://tvm.apache.org/docs//tutorials/">tutorials</a>.</p>
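
The same three artifacts can be consumed from any language binding over the TVM runtime; gotvm exposes the equivalent calls to Go. A minimal loading sketch in Python, assuming the 0.6-era API and hypothetical artifact file names:

    import numpy as np
    import tvm
    from tvm.contrib import graph_runtime

    # Load the compiled library, graph JSON and params produced by the
    # build step (file names here are hypothetical).
    lib = tvm.module.load('deploy_lib.so')
    graph = open('deploy_graph.json').read()
    params = bytearray(open('deploy_param.params', 'rb').read())

    # Create a graph runtime module, load weights and run inference
    # ('data' and the input shape depend on the imported model).
    module = graph_runtime.create(graph, lib, tvm.cpu(0))
    module.load_params(params)
    module.set_input('data', np.random.uniform(
        size=(1, 3, 224, 224)).astype('float32'))
    module.run()
    out = module.get_output(0).asnumpy()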
 
 <p>TVM now supports deploying compiled modules through Golang. Golang applications can make use of this
 to deploy the deep learning models through TVM. The scope of this blog is the introduction of <code class="highlighter-rouge">gotvm</code> package,
@@ -206,7 +206,7 @@ Developers can make use of TVM to import and compile deep learning models and ge
 <center> Import, Compile, Integrate and Deploy</center>
 <p></p>
 
-<p>TVM <a href="https://docs.tvm.ai/tutorials/#compile-deep-learning-models">Compile Deep Learning Models</a> tutorials
+<p>TVM <a href="https://tvm.apache.org/docs//tutorials/#compile-deep-learning-models">Compile Deep Learning Models</a> tutorials
 are available to compile models from all frameworks supported by the TVM frontend. This compilation process
 generates the artifacts required to integrate and deploy the model on a target.</p>
 
diff --git a/2019/03/18/tvm-apache-announcement.html b/2019/03/18/tvm-apache-announcement.html
index cc911ba..98e350d 100644
--- a/2019/03/18/tvm-apache-announcement.html
+++ b/2019/03/18/tvm-apache-announcement.html
@@ -176,7 +176,7 @@
 
 <p>We would like to take this chance to thank the Allen School for supporting the SAMPL team that gave birth to the TVM project. We would also like to thank the Halide project which provided the basis for TVM’s loop-level IR and initial code generation. We would like to thank our Apache incubator mentors for introducing the project to Apache and providing useful guidance. Finally, we would like to thank the TVM community and all of the organizations, as listed above, that supported the d [...]
 
-<p>See also the <a href="https://news.cs.washington.edu/2019/03/18/allen-schools-tvm-deep-learning-compiler-framework-transitions-to-apache/">Allen School news about the transition here</a>, <a href="https://sampl.cs.washington.edu/tvmconf/#about-tvmconf">TVM conference program slides and recordings</a>, and <a href="https://docs.tvm.ai/contribute/community.html">our community guideline here</a>. Follow us on Twitter: <a href="https://twitter.com/ApacheTVM">@ApacheTVM</a>.</p>
+<p>See also the <a href="https://news.cs.washington.edu/2019/03/18/allen-schools-tvm-deep-learning-compiler-framework-transitions-to-apache/">Allen School news about the transition here</a>, <a href="https://sampl.cs.washington.edu/tvmconf/#about-tvmconf">TVM conference program slides and recordings</a>, and <a href="https://tvm.apache.org/docs//contribute/community.html">our community guideline here</a>. Follow us on Twitter: <a href="https://twitter.com/ApacheTVM">@ApacheTVM</a>.</p>
 
     </div>
   </div>
diff --git a/2019/04/29/opt-cuda-quantized.html b/2019/04/29/opt-cuda-quantized.html
index 5301531..40c7157 100644
--- a/2019/04/29/opt-cuda-quantized.html
+++ b/2019/04/29/opt-cuda-quantized.html
@@ -201,7 +201,7 @@ With an efficient dot product operator, we can implement high-level operators su
 This is a typical use case of <code class="highlighter-rouge">dp4a</code>.
 TVM uses tensorization to support calling external intrinsics.
 We do not need to modify the original computation declaration; we use the schedule primitive <code class="highlighter-rouge">tensorize</code> to replace the accumulation with <code class="highlighter-rouge">dp4a</code> tensor intrinsic.
-More details of tensorization can be found in the <a href="https://docs.tvm.ai/tutorials/language/tensorize.html">tutorial</a>.</p>
+More details of tensorization can be found in the <a href="https://tvm.apache.org/docs//tutorials/language/tensorize.html">tutorial</a>.</p>
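
As a rough illustration of that pattern: the tensor intrinsic declares the same 4-element int8 dot product as the computation it replaces, and its lowering emits the hardware instruction. A sketch against the 0.6-era API, with a placeholder extern 'dot4_update' where the real schedule emits CUDA's __dp4a:

    import tvm

    def intrin_dot4():
        n = 4
        x = tvm.placeholder((n,), dtype='int8', name='x')
        y = tvm.placeholder((n,), dtype='int8', name='y')
        k = tvm.reduce_axis((0, n), name='k')
        z = tvm.compute((1,), lambda _: tvm.sum(
            x[k].astype('int32') * y[k].astype('int32'), axis=k), name='z')

        def intrin_func(ins, outs):
            # The real int8 schedule emits __dp4a here; 'dot4_update' is a
            # placeholder extern for illustration.
            ib = tvm.ir_builder.create()
            xx, yy = ins
            zz = outs[0]
            ib.emit(tvm.call_extern('int32', 'dot4_update',
                                    zz.access_ptr('w'),
                                    xx.access_ptr('r'),
                                    yy.access_ptr('r')))
            return ib.get()

        with tvm.build_config(offset_factor=1):
            return tvm.decl_tensor_intrin(z.op, intrin_func)

    # In the conv2d/dense schedule: split the reduction axis by 4 and
    # tensorize the inner part (a split reduction also needs the
    # body/reset/update form of intrin_func; see the tensorize tutorial).
    #   ko, ki = s[C].split(k, factor=4)
    #   s[C].tensorize(ki, intrin_dot4())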
 
 <h2 id="data-layout-rearrangement">Data Layout Rearrangement</h2>
 <p>One of the challenges in tensorization is that we may need to design special computation logic to adapt to the requirement of tensor intrinsics.
@@ -243,7 +243,7 @@ We also do some manual tiling such as splitting axes by 4 or 16 to facilitate ve
 <p>In quantized 2d convolution, we design a search space that includes a set of tunable options, such as the tile size, the axes to fuse, configurations of loop unrolling and double buffering.
 The templates of quantized <code class="highlighter-rouge">conv2d</code> and <code class="highlighter-rouge">dense</code> on CUDA are registered under template key <code class="highlighter-rouge">int8</code>.
 During automatic tuning, we can create tuning tasks for these quantized operators by setting the <code class="highlighter-rouge">template_key</code> argument.
-Details of how to launch automatic optimization can be found in the <a href="https://docs.tvm.ai/tutorials/autotvm/tune_relay_cuda.html">AutoTVM tutorial</a>.</p>
+Details of how to launch automatic optimization can be found in the <a href="https://tvm.apache.org/docs//tutorials/autotvm/tune_relay_cuda.html">AutoTVM tutorial</a>.</p>
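
Concretely, a sketch of task extraction and tuning with the int8 template, assuming the 0.6-era AutoTVM API, a quantized Relay program net with params, and a CUDA target:

    from tvm import autotvm, relay

    target = 'cuda'

    # Extract tuning tasks for conv2d/dense, then re-create each task under
    # the 'int8' template key (net and params come from the model import
    # and quantization steps).
    tasks = autotvm.task.extract_from_program(
        net, target=target, params=params,
        ops=(relay.op.nn.conv2d, relay.op.nn.dense))
    tasks = [autotvm.task.create(t.name, t.args, t.target, t.target_host,
                                 template_key='int8') for t in tasks]

    # Tune each task locally and log the best configurations.
    measure_option = autotvm.measure_option(
        builder=autotvm.LocalBuilder(),
        runner=autotvm.LocalRunner(number=10))
    for task in tasks:
        tuner = autotvm.tuner.XGBTuner(task)
        tuner.tune(n_trial=1000, measure_option=measure_option,
                   callbacks=[autotvm.callback.log_to_file('tuning.log')])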
 
 <h1 id="general-workflow">General Workflow</h1>
 
@@ -262,14 +262,14 @@ Details of how to launch automatic optimization can be found in the <a href="htt
 <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">net</span> <span class="o">=</span> <span class="n">relay</span><span class="o">.</span><span class="n">quantize</span><span class="o">.</span><span class="n">quantize</span><span class="p">(</span><span class="n">net</span><span class="p">,</span> <span class="n">params</span><span class="o">=</span><span class="n">params</span><span class="p">)</span>
 </code></pre></div></div>
 
-<p>Then, we use AutoTVM to extract tuning tasks for the operators in the model and perform automatic optimization. The <a href="https://docs.tvm.ai/tutorials/autotvm/tune_relay_cuda.html">AutoTVM tutorial</a> provides an example for this.</p>
+<p>Then, we use AutoTVM to extract tuning tasks for the operators in the model and perform automatic optimization. The <a href="https://tvm.apache.org/docs//tutorials/autotvm/tune_relay_cuda.html">AutoTVM tutorial</a> provides an example for this.</p>
 
 <p>Finally, we build the model and run inference in the quantized mode.</p>
 <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">with</span> <span class="n">relay</span><span class="o">.</span><span class="n">build_config</span><span class="p">(</span><span class="n">opt_level</span><span class="o">=</span><span class="mi">3</span><span class="p">):</span>
     <span class="n">graph</span><span class="p">,</span> <span class="n">lib</span><span class="p">,</span> <span class="n">params</span> <span class="o">=</span> <span class="n">relay</span><span class="o">.</span><span class="n">build</span><span class="p">(</span><span class="n">net</span><span class="p">,</span> <span class="n">target</span><span class="p">)</span>
 </code></pre></div></div>
 <p>The result of <code class="highlighter-rouge">relay.build</code> is a deployable library.
-We can either run inference <a href="https://docs.tvm.ai/tutorials/frontend/from_mxnet.html#execute-the-portable-graph-on-tvm">on the GPU</a> directly or deploy <a href="https://docs.tvm.ai/tutorials/frontend/deploy_model_on_rasp.html#deploy-the-model-remotely-by-rpc">on the remote devices</a> via RPC.</p>
+We can either run inference <a href="https://tvm.apache.org/docs//tutorials/frontend/from_mxnet.html#execute-the-portable-graph-on-tvm">on the GPU</a> directly or deploy <a href="https://tvm.apache.org/docs//tutorials/frontend/deploy_model_on_rasp.html#deploy-the-model-remotely-by-rpc">on the remote devices</a> via RPC.</p>
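
Putting the pieces together, a sketch of the tuned build-and-run flow, where 'data' and the (1, 3, 224, 224) input shape are placeholder assumptions for the real model:

    import numpy as np
    import tvm
    from tvm import autotvm, relay
    from tvm.contrib import graph_runtime

    # Apply the best schedules found during tuning, then build at opt level 3.
    with autotvm.apply_history_best('tuning.log'):
        with relay.build_config(opt_level=3):
            graph, lib, params = relay.build(net, target='cuda')

    # Run inference directly on the local GPU; deployment to a remote
    # device would instead ship these artifacts over RPC.
    module = graph_runtime.create(graph, lib, tvm.gpu(0))
    module.set_input(**params)
    module.set_input('data', np.random.uniform(
        size=(1, 3, 224, 224)).astype('float32'))
    module.run()
    out = module.get_output(0).asnumpy()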
 
 <h1 id="benchmark">Benchmark</h1>
 <p>To verify the performance of the quantized operators in TVM, we benchmark the performance of several popular network models including VGG-19, ResNet-50 and Inception V3.
diff --git a/atom.xml b/atom.xml
index 146647c..4a77194 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
  <title>TVM</title>
  <link href="https://tvm.apache.org" rel="self"/>
  <link href="https://tvm.apache.org"/>
- <updated>2020-03-30T10:20:31-07:00</updated>
+ <updated>2020-03-30T11:16:12-07:00</updated>
  <id>https://tvm.apache.org</id>
  <author>
    <name></name>
@@ -158,7 +158,7 @@ With an efficient dot product operator, we can implement high-level operators su
 This is a typical use case of &lt;code class=&quot;highlighter-rouge&quot;&gt;dp4a&lt;/code&gt;.
 TVM uses tensorization to support calling external intrinsics.
 We do not need to modify the original computation declaration; we use the schedule primitive &lt;code class=&quot;highlighter-rouge&quot;&gt;tensorize&lt;/code&gt; to replace the accumulation with &lt;code class=&quot;highlighter-rouge&quot;&gt;dp4a&lt;/code&gt; tensor intrinsic.
-More details of tensorization can be found in the &lt;a href=&quot;https://docs.tvm.ai/tutorials/language/tensorize.html&quot;&gt;tutorial&lt;/a&gt;.&lt;/p&gt;
+More details of tensorization can be found in the &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/language/tensorize.html&quot;&gt;tutorial&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h2 id=&quot;data-layout-rearrangement&quot;&gt;Data Layout Rearrangement&lt;/h2&gt;
 &lt;p&gt;One of the challenges in tensorization is that we may need to design special computation logic to adapt to the requirement of tensor intrinsics.
@@ -200,7 +200,7 @@ We also do some manual tiling such as splitting axes by 4 or 16 to facilitate ve
 &lt;p&gt;In quantized 2d convolution, we design a search space that includes a set of tunable options, such as the tile size, the axes to fuse, configurations of loop unrolling and double buffering.
 The templates of quantized &lt;code class=&quot;highlighter-rouge&quot;&gt;conv2d&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;dense&lt;/code&gt; on CUDA are registered under template key &lt;code class=&quot;highlighter-rouge&quot;&gt;int8&lt;/code&gt;.
 During automatic tuning, we can create tuning tasks for these quantized operators by setting the &lt;code class=&quot;highlighter-rouge&quot;&gt;template_key&lt;/code&gt; argument.
-Details of how to launch automatic optimization can be found in the &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_relay_cuda.html&quot;&gt;AutoTVM tutorial&lt;/a&gt;.&lt;/p&gt;
+Details of how to launch automatic optimization can be found in the &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_relay_cuda.html&quot;&gt;AutoTVM tutorial&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h1 id=&quot;general-workflow&quot;&gt;General Workflow&lt;/h1&gt;
 
@@ -219,14 +219,14 @@ Details of how to launch automatic optimization can be found in the &lt;a href=&
 &lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;net&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;relay&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;quantize&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;quantize&lt;/spa [...]
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
 
-&lt;p&gt;Then, we use AutoTVM to extract tuning tasks for the operators in the model and perform automatic optimization. The &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_relay_cuda.html&quot;&gt;AutoTVM tutorial&lt;/a&gt; provides an example for this.&lt;/p&gt;
+&lt;p&gt;Then, we use AutoTVM to extract tuning tasks for the operators in the model and perform automatic optimization. The &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_relay_cuda.html&quot;&gt;AutoTVM tutorial&lt;/a&gt; provides an example for this.&lt;/p&gt;
 
 &lt;p&gt;Finally, we build the model and run inference in the quantized mode.&lt;/p&gt;
 &lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;with&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;relay&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;build_config&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;opt_level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt [...]
     &lt;span class=&quot;n&quot;&gt;graph&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;lib&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;params&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;relay&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;s [...]
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
 &lt;p&gt;The result of &lt;code class=&quot;highlighter-rouge&quot;&gt;relay.build&lt;/code&gt; is a deployable library.
-We can either run inference &lt;a href=&quot;https://docs.tvm.ai/tutorials/frontend/from_mxnet.html#execute-the-portable-graph-on-tvm&quot;&gt;on the GPU&lt;/a&gt; directly or deploy &lt;a href=&quot;https://docs.tvm.ai/tutorials/frontend/deploy_model_on_rasp.html#deploy-the-model-remotely-by-rpc&quot;&gt;on the remote devices&lt;/a&gt; via RPC.&lt;/p&gt;
+We can either run inference &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/frontend/from_mxnet.html#execute-the-portable-graph-on-tvm&quot;&gt;on the GPU&lt;/a&gt; directly or deploy &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/frontend/deploy_model_on_rasp.html#deploy-the-model-remotely-by-rpc&quot;&gt;on the remote devices&lt;/a&gt; via RPC.&lt;/p&gt;
 
 &lt;h1 id=&quot;benchmark&quot;&gt;Benchmark&lt;/h1&gt;
 &lt;p&gt;To verify the performance of the quantized operators in TVM, we benchmark the performance of several popular network models including VGG-19, ResNet-50 and Inception V3.
@@ -277,7 +277,7 @@ We show that automatic optimization in TVM makes it easy and flexible to support
 
 &lt;p&gt;We would like to take this chance to thank the Allen School for supporting the SAMPL team that gave birth to the TVM project. We would also like to thank the Halide project which provided the basis for TVM’s loop-level IR and initial code generation. We would like to thank our Apache incubator mentors for introducing the project to Apache and providing useful guidance. Finally, we would like to thank the TVM community and all of the organizations, as listed above, that supported [...]
 
-&lt;p&gt;See also the &lt;a href=&quot;https://news.cs.washington.edu/2019/03/18/allen-schools-tvm-deep-learning-compiler-framework-transitions-to-apache/&quot;&gt;Allen School news about the transition here&lt;/a&gt;, &lt;a href=&quot;https://sampl.cs.washington.edu/tvmconf/#about-tvmconf&quot;&gt;TVM conference program slides and recordings&lt;/a&gt;, and &lt;a href=&quot;https://docs.tvm.ai/contribute/community.html&quot;&gt;our community guideline here&lt;/a&gt;. Follow us on Twitter [...]
+&lt;p&gt;See also the &lt;a href=&quot;https://news.cs.washington.edu/2019/03/18/allen-schools-tvm-deep-learning-compiler-framework-transitions-to-apache/&quot;&gt;Allen School news about the transition here&lt;/a&gt;, &lt;a href=&quot;https://sampl.cs.washington.edu/tvmconf/#about-tvmconf&quot;&gt;TVM conference program slides and recordings&lt;/a&gt;, and &lt;a href=&quot;https://tvm.apache.org/docs//contribute/community.html&quot;&gt;our community guideline here&lt;/a&gt;. Follow us o [...]
 </content>
  </entry>
  
@@ -300,7 +300,7 @@ deploy deep learning models from a variety of frameworks to a choice of hardware
 
 &lt;p&gt;The TVM import and compilation process generates a graph JSON, a module and a params. Any application that
 integrates the TVM runtime can load these compiled modules and perform inference. A detailed tutorial of module
-import and compilation using TVM can be found at &lt;a href=&quot;https://docs.tvm.ai/tutorials/&quot;&gt;tutorials&lt;/a&gt;.&lt;/p&gt;
+import and compilation using TVM can be found at &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/&quot;&gt;tutorials&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;TVM now supports deploying compiled modules through Golang. Golang applications can make use of this
 to deploy the deep learning models through TVM. The scope of this blog is the introduction of &lt;code class=&quot;highlighter-rouge&quot;&gt;gotvm&lt;/code&gt; package,
@@ -330,7 +330,7 @@ Developers can make use of TVM to import and compile deep learning models and ge
 &lt;center&gt; Import, Compile, Integrate and Deploy&lt;/center&gt;
 &lt;p&gt;&lt;/p&gt;
 
-&lt;p&gt;TVM &lt;a href=&quot;https://docs.tvm.ai/tutorials/#compile-deep-learning-models&quot;&gt;Compile Deep Learning Models&lt;/a&gt; tutorials
+&lt;p&gt;TVM &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/#compile-deep-learning-models&quot;&gt;Compile Deep Learning Models&lt;/a&gt; tutorials
 are available to compile models from all frameworks supported by the TVM frontend. This compilation process
 generates the artifacts required to integrate and deploy the model on a target.&lt;/p&gt;
 
@@ -1113,9 +1113,9 @@ for inference deployment. TVM just provides such a solution.&lt;/p&gt;
 
 &lt;h2 id=&quot;links&quot;&gt;Links&lt;/h2&gt;
 &lt;p&gt;[1] benchmark: &lt;a href=&quot;https://github.com/dmlc/tvm/tree/master/apps/benchmark&quot;&gt;https://github.com/dmlc/tvm/tree/master/apps/benchmark&lt;/a&gt;&lt;br /&gt;
-[2] Tutorial on tuning for ARM CPU: &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_arm.html&quot;&gt;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_arm.html&lt;/a&gt;&lt;br /&gt;
-[3] Tutorial on tuning for Mobile GPU: &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html&quot;&gt;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html&lt;/a&gt;&lt;br /&gt;
-[4] Tutorial on tuning for NVIDIA/AMD GPU: &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html&quot;&gt;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html&lt;/a&gt;&lt;br /&gt;
+[2] Tutorial on tuning for ARM CPU: &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_arm.html&quot;&gt;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_arm.html&lt;/a&gt;&lt;br /&gt;
+[3] Tutorial on tuning for Mobile GPU: &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_mobile_gpu.html&quot;&gt;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_mobile_gpu.html&lt;/a&gt;&lt;br /&gt;
+[4] Tutorial on tuning for NVIDIA/AMD GPU: &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_cuda.html&quot;&gt;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_cuda.html&lt;/a&gt;&lt;br /&gt;
 [5] Paper about AutoTVM: &lt;a href=&quot;https://arxiv.org/abs/1805.08166&quot;&gt;Learning to Optimize Tensor Program&lt;/a&gt;&lt;br /&gt;
 [6] Paper about Intel CPU (by AWS contributors) :  &lt;a href=&quot;https://arxiv.org/abs/1809.02697&quot;&gt;Optimizing CNN Model Inference on CPUs&lt;/a&gt;&lt;/p&gt;
 
@@ -1210,7 +1210,7 @@ schedule:&lt;/p&gt;
 &lt;p&gt;For brevity, we do not cover TVM’s large collection of scheduling primitives
 that we can use to optimize matrix multiplication. If you wish to make a custom
 GEMM operator run &lt;em&gt;fast&lt;/em&gt; on your hardware device, a detailed tutorial can be
-found &lt;a href=&quot;https://docs.tvm.ai/tutorials/optimize/opt_gemm.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+found &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/optimize/opt_gemm.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;We then convert the TVM function into one that supports PyTorch tensors:&lt;/p&gt;
 &lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    &lt;span class=&quot;kn&quot;&gt;from&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;tvm.contrib.dlpack&lt;/span&gt; &lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;to_pytorch_func&lt;/span&gt;
@@ -1397,7 +1397,7 @@ This kind of high-level visibility is essential to system designers who want to
 &lt;h2 id=&quot;get-started&quot;&gt;Get Started!&lt;/h2&gt;
 &lt;ul&gt;
   &lt;li&gt;TVM and VTA Github page can be found here: &lt;a href=&quot;https://github.com/dmlc/tvm&quot;&gt;https://github.com/dmlc/tvm&lt;/a&gt;.&lt;/li&gt;
-  &lt;li&gt;You can get started with easy to follow &lt;a href=&quot;https://docs.tvm.ai/vta/tutorials/index.html&quot;&gt;tutorials on programming VTA with TVM&lt;/a&gt;.&lt;/li&gt;
+  &lt;li&gt;You can get started with easy to follow &lt;a href=&quot;https://tvm.apache.org/docs//vta/tutorials/index.html&quot;&gt;tutorials on programming VTA with TVM&lt;/a&gt;.&lt;/li&gt;
   &lt;li&gt;For more technical details on VTA, read our &lt;a href=&quot;https://arxiv.org/abs/1807.04188&quot;&gt;VTA technical report&lt;/a&gt; on ArXiv.&lt;/li&gt;
 &lt;/ul&gt;
 </content>
diff --git a/community.html b/community.html
index b88e61f..3cc31b9 100644
--- a/community.html
+++ b/community.html
@@ -200,7 +200,7 @@ Please reach out are interested working in aspects that are not on the roadmap.<
 <p>As a community project, we welcome contributions!
 The package is developed and used by the community.</p>
 
-<p><a href="https://docs.tvm.ai/contribute" class="link-btn">TVM Contributor Guideline</a></p>
+<p><a href="https://tvm.apache.org/docs//contribute" class="link-btn">TVM Contributor Guideline</a></p>
 
 <p><br /></p>
 
diff --git a/rss.xml b/rss.xml
index 00804bd..967dd59 100644
--- a/rss.xml
+++ b/rss.xml
@@ -5,8 +5,8 @@
         <description>TVM - </description>
         <link>https://tvm.apache.org</link>
         <atom:link href="https://tvm.apache.org" rel="self" type="application/rss+xml" />
-        <lastBuildDate>Mon, 30 Mar 2020 10:20:31 -0700</lastBuildDate>
-        <pubDate>Mon, 30 Mar 2020 10:20:31 -0700</pubDate>
+        <lastBuildDate>Mon, 30 Mar 2020 11:16:12 -0700</lastBuildDate>
+        <pubDate>Mon, 30 Mar 2020 11:16:12 -0700</pubDate>
         <ttl>60</ttl>
 
 
@@ -153,7 +153,7 @@ With an efficient dot product operator, we can implement high-level operators su
 This is a typical use case of &lt;code class=&quot;highlighter-rouge&quot;&gt;dp4a&lt;/code&gt;.
 TVM uses tensorization to support calling external intrinsics.
 We do not need to modify the original computation declaration; we use the schedule primitive &lt;code class=&quot;highlighter-rouge&quot;&gt;tensorize&lt;/code&gt; to replace the accumulation with &lt;code class=&quot;highlighter-rouge&quot;&gt;dp4a&lt;/code&gt; tensor intrinsic.
-More details of tensorization can be found in the &lt;a href=&quot;https://docs.tvm.ai/tutorials/language/tensorize.html&quot;&gt;tutorial&lt;/a&gt;.&lt;/p&gt;
+More details of tensorization can be found in the &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/language/tensorize.html&quot;&gt;tutorial&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h2 id=&quot;data-layout-rearrangement&quot;&gt;Data Layout Rearrangement&lt;/h2&gt;
 &lt;p&gt;One of the challenges in tensorization is that we may need to design special computation logic to adapt to the requirement of tensor intrinsics.
@@ -195,7 +195,7 @@ We also do some manual tiling such as splitting axes by 4 or 16 to facilitate ve
 &lt;p&gt;In quantized 2d convolution, we design a search space that includes a set of tunable options, such as the tile size, the axes to fuse, configurations of loop unrolling and double buffering.
 The templates of quantized &lt;code class=&quot;highlighter-rouge&quot;&gt;conv2d&lt;/code&gt; and &lt;code class=&quot;highlighter-rouge&quot;&gt;dense&lt;/code&gt; on CUDA are registered under template key &lt;code class=&quot;highlighter-rouge&quot;&gt;int8&lt;/code&gt;.
 During automatic tuning, we can create tuning tasks for these quantized operators by setting the &lt;code class=&quot;highlighter-rouge&quot;&gt;template_key&lt;/code&gt; argument.
-Details of how to launch automatic optimization can be found in the &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_relay_cuda.html&quot;&gt;AutoTVM tutorial&lt;/a&gt;.&lt;/p&gt;
+Details of how to launch automatic optimization can be found in the &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_relay_cuda.html&quot;&gt;AutoTVM tutorial&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h1 id=&quot;general-workflow&quot;&gt;General Workflow&lt;/h1&gt;
 
@@ -214,14 +214,14 @@ Details of how to launch automatic optimization can be found in the &lt;a href=&
 &lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;n&quot;&gt;net&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;relay&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;quantize&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;quantize&lt;/spa [...]
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
 
-&lt;p&gt;Then, we use AutoTVM to extract tuning tasks for the operators in the model and perform automatic optimization. The &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_relay_cuda.html&quot;&gt;AutoTVM tutorial&lt;/a&gt; provides an example for this.&lt;/p&gt;
+&lt;p&gt;Then, we use AutoTVM to extract tuning tasks for the operators in the model and perform automatic optimization. The &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_relay_cuda.html&quot;&gt;AutoTVM tutorial&lt;/a&gt; provides an example for this.&lt;/p&gt;
 
 &lt;p&gt;Finally, we build the model and run inference in the quantized mode.&lt;/p&gt;
 &lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;k&quot;&gt;with&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;relay&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;build_config&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;opt_level&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt [...]
     &lt;span class=&quot;n&quot;&gt;graph&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;lib&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;params&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;relay&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;s [...]
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
 &lt;p&gt;The result of &lt;code class=&quot;highlighter-rouge&quot;&gt;relay.build&lt;/code&gt; is a deployable library.
-We can either run inference &lt;a href=&quot;https://docs.tvm.ai/tutorials/frontend/from_mxnet.html#execute-the-portable-graph-on-tvm&quot;&gt;on the GPU&lt;/a&gt; directly or deploy &lt;a href=&quot;https://docs.tvm.ai/tutorials/frontend/deploy_model_on_rasp.html#deploy-the-model-remotely-by-rpc&quot;&gt;on the remote devices&lt;/a&gt; via RPC.&lt;/p&gt;
+We can either run inference &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/frontend/from_mxnet.html#execute-the-portable-graph-on-tvm&quot;&gt;on the GPU&lt;/a&gt; directly or deploy &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/frontend/deploy_model_on_rasp.html#deploy-the-model-remotely-by-rpc&quot;&gt;on the remote devices&lt;/a&gt; via RPC.&lt;/p&gt;
 
 &lt;h1 id=&quot;benchmark&quot;&gt;Benchmark&lt;/h1&gt;
 &lt;p&gt;To verify the performance of the quantized operators in TVM, we benchmark the performance of several popular network models including VGG-19, ResNet-50 and Inception V3.
@@ -272,7 +272,7 @@ We show that automatic optimization in TVM makes it easy and flexible to support
 
 &lt;p&gt;We would like to take this chance to thank the Allen School for supporting the SAMPL team that gave birth to the TVM project. We would also like to thank the Halide project which provided the basis for TVM’s loop-level IR and initial code generation. We would like to thank our Apache incubator mentors for introducing the project to Apache and providing useful guidance. Finally, we would like to thank the TVM community and all of the organizations, as listed above, that supported [...]
 
-&lt;p&gt;See also the &lt;a href=&quot;https://news.cs.washington.edu/2019/03/18/allen-schools-tvm-deep-learning-compiler-framework-transitions-to-apache/&quot;&gt;Allen School news about the transition here&lt;/a&gt;, &lt;a href=&quot;https://sampl.cs.washington.edu/tvmconf/#about-tvmconf&quot;&gt;TVM conference program slides and recordings&lt;/a&gt;, and &lt;a href=&quot;https://docs.tvm.ai/contribute/community.html&quot;&gt;our community guideline here&lt;/a&gt;. Follow us on Twitter [...]
+&lt;p&gt;See also the &lt;a href=&quot;https://news.cs.washington.edu/2019/03/18/allen-schools-tvm-deep-learning-compiler-framework-transitions-to-apache/&quot;&gt;Allen School news about the transition here&lt;/a&gt;, &lt;a href=&quot;https://sampl.cs.washington.edu/tvmconf/#about-tvmconf&quot;&gt;TVM conference program slides and recordings&lt;/a&gt;, and &lt;a href=&quot;https://tvm.apache.org/docs//contribute/community.html&quot;&gt;our community guideline here&lt;/a&gt;. Follow us o [...]
 </description>
                 <link>https://tvm.apache.org/2019/03/18/tvm-apache-announcement</link>
                 <guid>https://tvm.apache.org/2019/03/18/tvm-apache-announcement</guid>
@@ -295,7 +295,7 @@ deploy deep learning models from a variety of frameworks to a choice of hardware
 
 &lt;p&gt;The TVM import and compilation process generates a graph JSON, a module and a params. Any application that
 integrates the TVM runtime can load these compiled modules and perform inference. A detailed tutorial of module
-import and compilation using TVM can be found at &lt;a href=&quot;https://docs.tvm.ai/tutorials/&quot;&gt;tutorials&lt;/a&gt;.&lt;/p&gt;
+import and compilation using TVM can be found at &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/&quot;&gt;tutorials&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;TVM now supports deploying compiled modules through Golang. Golang applications can make use of this
 to deploy the deep learning models through TVM. The scope of this blog is the introduction of &lt;code class=&quot;highlighter-rouge&quot;&gt;gotvm&lt;/code&gt; package,
@@ -325,7 +325,7 @@ Developers can make use of TVM to import and compile deep learning models and ge
 &lt;center&gt; Import, Compile, Integrate and Deploy&lt;/center&gt;
 &lt;p&gt;&lt;/p&gt;
 
-&lt;p&gt;TVM &lt;a href=&quot;https://docs.tvm.ai/tutorials/#compile-deep-learning-models&quot;&gt;Compile Deep Learning Models&lt;/a&gt; tutorials
+&lt;p&gt;TVM &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/#compile-deep-learning-models&quot;&gt;Compile Deep Learning Models&lt;/a&gt; tutorials
 are available to compile models from all frameworks supported by the TVM frontend. This compilation process
 generates the artifacts required to integrate and deploy the model on a target.&lt;/p&gt;
 
@@ -1108,9 +1108,9 @@ for inference deployment. TVM just provides such a solution.&lt;/p&gt;
 
 &lt;h2 id=&quot;links&quot;&gt;Links&lt;/h2&gt;
 &lt;p&gt;[1] benchmark: &lt;a href=&quot;https://github.com/dmlc/tvm/tree/master/apps/benchmark&quot;&gt;https://github.com/dmlc/tvm/tree/master/apps/benchmark&lt;/a&gt;&lt;br /&gt;
-[2] Tutorial on tuning for ARM CPU: &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_arm.html&quot;&gt;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_arm.html&lt;/a&gt;&lt;br /&gt;
-[3] Tutorial on tuning for Mobile GPU: &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html&quot;&gt;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_mobile_gpu.html&lt;/a&gt;&lt;br /&gt;
-[4] Tutorial on tuning for NVIDIA/AMD GPU: &lt;a href=&quot;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html&quot;&gt;https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html&lt;/a&gt;&lt;br /&gt;
+[2] Tutorial on tuning for ARM CPU: &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_arm.html&quot;&gt;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_arm.html&lt;/a&gt;&lt;br /&gt;
+[3] Tutorial on tuning for Mobile GPU: &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_mobile_gpu.html&quot;&gt;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_mobile_gpu.html&lt;/a&gt;&lt;br /&gt;
+[4] Tutorial on tuning for NVIDIA/AMD GPU: &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_cuda.html&quot;&gt;https://tvm.apache.org/docs//tutorials/autotvm/tune_nnvm_cuda.html&lt;/a&gt;&lt;br /&gt;
 [5] Paper about AutoTVM: &lt;a href=&quot;https://arxiv.org/abs/1805.08166&quot;&gt;Learning to Optimize Tensor Program&lt;/a&gt;&lt;br /&gt;
 [6] Paper about Intel CPU (by AWS contributors) :  &lt;a href=&quot;https://arxiv.org/abs/1809.02697&quot;&gt;Optimizing CNN Model Inference on CPUs&lt;/a&gt;&lt;/p&gt;
 
@@ -1205,7 +1205,7 @@ schedule:&lt;/p&gt;
 &lt;p&gt;For brevity, we do not cover TVM’s large collection of scheduling primitives
 that we can use to optimize matrix multiplication. If you wish to make a custom
 GEMM operator run &lt;em&gt;fast&lt;/em&gt; on your hardware device, a detailed tutorial can be
-found &lt;a href=&quot;https://docs.tvm.ai/tutorials/optimize/opt_gemm.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+found &lt;a href=&quot;https://tvm.apache.org/docs//tutorials/optimize/opt_gemm.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;We then convert the TVM function into one that supports PyTorch tensors:&lt;/p&gt;
 &lt;div class=&quot;language-python highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    &lt;span class=&quot;kn&quot;&gt;from&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;tvm.contrib.dlpack&lt;/span&gt; &lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;to_pytorch_func&lt;/span&gt;
@@ -1392,7 +1392,7 @@ This kind of high-level visibility is essential to system designers who want to
 &lt;h2 id=&quot;get-started&quot;&gt;Get Started!&lt;/h2&gt;
 &lt;ul&gt;
   &lt;li&gt;TVM and VTA Github page can be found here: &lt;a href=&quot;https://github.com/dmlc/tvm&quot;&gt;https://github.com/dmlc/tvm&lt;/a&gt;.&lt;/li&gt;
-  &lt;li&gt;You can get started with easy to follow &lt;a href=&quot;https://docs.tvm.ai/vta/tutorials/index.html&quot;&gt;tutorials on programming VTA with TVM&lt;/a&gt;.&lt;/li&gt;
+  &lt;li&gt;You can get started with easy to follow &lt;a href=&quot;https://tvm.apache.org/docs//vta/tutorials/index.html&quot;&gt;tutorials on programming VTA with TVM&lt;/a&gt;.&lt;/li&gt;
   &lt;li&gt;For more technical details on VTA, read our &lt;a href=&quot;https://arxiv.org/abs/1807.04188&quot;&gt;VTA technical report&lt;/a&gt; on ArXiv.&lt;/li&gt;
 &lt;/ul&gt;
 </description>