Posted to commits@tvm.apache.org by tq...@apache.org on 2020/11/02 13:45:44 UTC

[incubator-tvm] branch v0.7 updated: Update stale link (#6820)

This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch v0.7
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/v0.7 by this push:
     new 766a203  Update stale link (#6820)
766a203 is described below

commit 766a20333fe23810031fb8dd77813787c8eeb257
Author: Tianqi Chen <tq...@users.noreply.github.com>
AuthorDate: Mon Nov 2 08:45:27 2020 -0500

    Update stale link (#6820)
---
 docs/vta/dev/hardware.rst                       | 12 ++++++------
 docs/vta/dev/index.rst                          |  2 +-
 docs/vta/install.rst                            |  2 +-
 vta/python/vta/bitstream.py                     |  2 +-
 vta/tutorials/frontend/deploy_classification.py |  2 +-
 vta/tutorials/matrix_multiply.py                |  6 +++---
 vta/tutorials/optimize/convolution_opt.py       |  6 +++---
 vta/tutorials/optimize/matrix_multiply_opt.py   |  4 ++--
 vta/tutorials/vta_get_started.py                |  2 +-
 9 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/docs/vta/dev/hardware.rst b/docs/vta/dev/hardware.rst
index 4d06826..3253ac0 100644
--- a/docs/vta/dev/hardware.rst
+++ b/docs/vta/dev/hardware.rst
@@ -36,7 +36,7 @@ In addition the design adopts decoupled access-execute to hide memory access lat
 
 To a broader extent, VTA can serve as a template deep learning accelerator design for full stack optimization, exposing a generic tensor computation interface to the compiler stack.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/blogpost/vta_overview.png
+.. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/blogpost/vta_overview.png
    :align: center
    :width: 80%
 
@@ -175,7 +175,7 @@ Finally, the ``STORE`` instructions are executed by the store module exclusively
 The fields of each instruction are described in the figure below.
 The meaning of each field will be further explained in the :ref:`vta-uarch` section.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/vta_instructions.png
+.. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/developer/vta_instructions.png
    :align: center
    :width: 100%
 
@@ -191,7 +191,7 @@ VTA relies on dependence FIFO queues between hardware modules to synchronize the
 The figure below shows how a given hardware module can execute concurrently from its producer and consumer modules in a dataflow fashion through the use of dependence FIFO queues, and single-reader/single-writer SRAM buffers.
 Each module is connected to its consumer and producer via read-after-write (RAW) and write-after-read (WAR) dependence queues.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/dataflow.png
+.. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/developer/dataflow.png
    :align: center
    :width: 100%
 
@@ -258,7 +258,7 @@ There are two types of compute micro-ops: ALU and GEMM operations.
 To minimize the footprint of micro-op kernels, while avoiding the need for control-flow instructions such as conditional jumps, the compute module executes micro-op sequences inside a two-level nested loop that computes the location of each tensor register via an affine function.
 This compression approach helps reduce the micro-kernel instruction footprint, and applies to both matrix multiplication and 2D convolution, commonly found in neural network operators.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/gemm_core.png
+.. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/developer/gemm_core.png
    :align: center
    :width: 100%
 
@@ -269,7 +269,7 @@ This tensorization intrinsic is defined by the dimensions of the input, weight a
 Each data type can have a different integer precision: typically both weight and input types are low-precision (8-bits or less), while the accumulator tensor has a wider type to prevent overflows (32-bits).
 In order to keep the GEMM core busy, each of the input buffer, weight buffer, and register file have to expose sufficient read/write bandwidth.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/alu_core.png
+.. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/developer/alu_core.png
    :align: center
    :width: 100%
 
@@ -289,7 +289,7 @@ The micro-code in the context of tensor ALU computation only takes care of speci
 Load and Store Modules
 ~~~~~~~~~~~~~~~~~~~~~~
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/developer/2d_dma.png
+.. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/developer/2d_dma.png
    :align: center
    :width: 100%
 
diff --git a/docs/vta/dev/index.rst b/docs/vta/dev/index.rst
index 0ba3bd1..0536f7d 100644
--- a/docs/vta/dev/index.rst
+++ b/docs/vta/dev/index.rst
@@ -20,7 +20,7 @@ VTA Design and Developer Guide
 
 This developer guide details the complete VTA-TVM hardware-software stack.
 
-.. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/blogpost/vta_stack.png
+.. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/blogpost/vta_stack.png
    :align: center
    :width: 60%
 
diff --git a/docs/vta/install.rst b/docs/vta/install.rst
index e47f84d..835fb2a 100644
--- a/docs/vta/install.rst
+++ b/docs/vta/install.rst
@@ -202,7 +202,7 @@ This time again, we will run the 2D convolution testbench.
 Beforehand, we need to program the Pynq board FPGA with a VTA bitstream, and build the VTA runtime via RPC.
 The following ``test_program_rpc.py`` script will perform two operations:
 
-* FPGA programming, by downloading a pre-compiled bitstream from a `VTA bitstream repository <https://github.com/uwsaml/vta-distro>`_ that matches the default ``vta_config.json`` configuration set by the host, and sending it over to the Pynq via RPC to program the Pynq's FPGA.
+* FPGA programming, by downloading a pre-compiled bitstream from a `VTA bitstream repository <https://github.com/uwsampl/vta-distro>`_ that matches the default ``vta_config.json`` configuration set by the host, and sending it over to the Pynq via RPC to program the Pynq's FPGA.
 * Runtime building on the Pynq, which needs to be run every time the ``vta_config.json`` configuration is modified. This ensures that the VTA software runtime that generates the accelerator's executable via just-in-time (JIT) compilation matches the specifications of the VTA design that is programmed on the FPGA. The build process takes about 30 seconds to complete so be patient!
 
 .. code:: bash
diff --git a/vta/python/vta/bitstream.py b/vta/python/vta/bitstream.py
index 254243d..3f70640 100644
--- a/vta/python/vta/bitstream.py
+++ b/vta/python/vta/bitstream.py
@@ -29,7 +29,7 @@ else:
     import urllib2
 
 # bitstream repo
-BITSTREAM_URL = "https://github.com/uwsaml/vta-distro/raw/master/bitstreams/"
+BITSTREAM_URL = "https://github.com/uwsampl/vta-distro/raw/master/bitstreams/"
 
 
 def get_bitstream_path():
diff --git a/vta/tutorials/frontend/deploy_classification.py b/vta/tutorials/frontend/deploy_classification.py
index 04716ce..963e5f0 100644
--- a/vta/tutorials/frontend/deploy_classification.py
+++ b/vta/tutorials/frontend/deploy_classification.py
@@ -220,7 +220,7 @@ with autotvm.tophub.context(target):
 # and an input test image.
 
 # Download ImageNet categories
-categ_url = "https://github.com/uwsaml/web-data/raw/master/vta/models/"
+categ_url = "https://github.com/uwsampl/web-data/raw/master/vta/models/"
 categ_fn = "synset.txt"
 download.download(join(categ_url, categ_fn), categ_fn)
 synset = eval(open(categ_fn).read())
diff --git a/vta/tutorials/matrix_multiply.py b/vta/tutorials/matrix_multiply.py
index 77fc805..9e8f0cb 100644
--- a/vta/tutorials/matrix_multiply.py
+++ b/vta/tutorials/matrix_multiply.py
@@ -86,7 +86,7 @@ elif env.TARGET in ["sim", "tsim"]:
 # The last operation is a cast and copy back to DRAM, into results tensor
 # :code:`C`.
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/gemm_dataflow.png
+# .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/gemm_dataflow.png
 #      :align: center
 
 ######################################################################
@@ -107,7 +107,7 @@ elif env.TARGET in ["sim", "tsim"]:
 #   adding the result matrix to an accumulator matrix, as shown in the
 #   figure below.
 #
-#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/tensor_core.png
+#   .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/tensor_core.png
 #        :align: center
 #        :width: 480px
 #
@@ -126,7 +126,7 @@ elif env.TARGET in ["sim", "tsim"]:
 #   contiguous.
 #   The resulting tiled tensor has a shape of (2, 4, 2, 2).
 #
-#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/data_tiling.png
+#   .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/data_tiling.png
 #        :align: center
 #        :width: 480px
 #
diff --git a/vta/tutorials/optimize/convolution_opt.py b/vta/tutorials/optimize/convolution_opt.py
index 3f079e8..d094040 100644
--- a/vta/tutorials/optimize/convolution_opt.py
+++ b/vta/tutorials/optimize/convolution_opt.py
@@ -93,7 +93,7 @@ elif env.TARGET in ["sim", "tsim"]:
 # convolution followed by a rectified linear activation.
 # We describe the TVM dataflow graph of the 2D convolution layer below:
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/conv2d_dataflow.png
+# .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/conv2d_dataflow.png
 #      :align: center
 #
 # This computation is intentionally too large to fit onto VTA's on-chip
@@ -120,7 +120,7 @@ elif env.TARGET in ["sim", "tsim"]:
 #   loaded from DRAM into VTA's SRAM, following a 2D strided and padded memory
 #   read.
 #
-#   .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/padding.png
+#   .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/padding.png
 #        :align: center
 #        :width: 480px
 
@@ -292,7 +292,7 @@ s[res_conv].reorder(ic_out, b_inn, oc_inn, y_inn, ic_inn, dy, dx, x_inn, b_tns,
 # We show how work is split when computing the 2D convolution in the figure
 # below.
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/virtual_threading.png
+# .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/virtual_threading.png
 #      :align: center
 #      :width: 480px
 
diff --git a/vta/tutorials/optimize/matrix_multiply_opt.py b/vta/tutorials/optimize/matrix_multiply_opt.py
index 28600d4..e3bb504 100644
--- a/vta/tutorials/optimize/matrix_multiply_opt.py
+++ b/vta/tutorials/optimize/matrix_multiply_opt.py
@@ -88,7 +88,7 @@ elif env.TARGET in ["sim", "tsim"]:
 # matrix multiplication followed by a rectified linear activation.
 # We describe the TVM dataflow graph of the fully connected layer below:
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/fc_dataflow.png
+# .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/fc_dataflow.png
 #      :align: center
 #
 # This computation is intentionally too large to fit onto VTA's on-chip
@@ -183,7 +183,7 @@ print(tvm.lower(s, [data, weight, res], simple_mode=True))
 # We show the outcome of blocking on the computation schedule in the diagram
 # below:
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/blocking.png
+# .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/blocking.png
 #      :align: center
 #      :width: 480px
 #
diff --git a/vta/tutorials/vta_get_started.py b/vta/tutorials/vta_get_started.py
index 46b050f..0700866 100644
--- a/vta/tutorials/vta_get_started.py
+++ b/vta/tutorials/vta_get_started.py
@@ -115,7 +115,7 @@ elif env.TARGET == "sim":
 # The last operation is a cast and copy back to DRAM, into results tensor
 # :code:`C`.
 #
-# .. image:: https://raw.githubusercontent.com/uwsaml/web-data/master/vta/tutorial/vadd_dataflow.png
+# .. image:: https://raw.githubusercontent.com/uwsampl/web-data/master/vta/tutorial/vadd_dataflow.png
 #      :align: center
 
 ######################################################################
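
For anyone who wants to confirm the link update, below is a minimal, standalone sketch (not part of the commit; Python standard library only) that issues HEAD requests against a few of the updated uwsampl URLs copied verbatim from the hunks above. The URL selection is illustrative, not exhaustive; a link that is still stale will report an HTTP error.

import urllib.request

# Updated targets copied from the diff above (illustrative subset).
URLS = [
    "https://raw.githubusercontent.com/uwsampl/web-data/master/vta/blogpost/vta_overview.png",
    "https://raw.githubusercontent.com/uwsampl/web-data/master/vta/blogpost/vta_stack.png",
    "https://github.com/uwsampl/web-data/raw/master/vta/models/synset.txt",
]

for url in URLS:
    # A HEAD request is enough to confirm the resource exists without downloading it.
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(resp.status, url)
    except Exception as exc:
        print("FAIL", url, exc)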