Posted to commits@tvm.apache.org by sy...@apache.org on 2022/04/27 03:58:52 UTC

[tvm] branch main updated: [docs] Update publication list (#11137)

This is an automated email from the ASF dual-hosted git repository.

syfeng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
     new c2803f6a06 [docs] Update publication list (#11137)
c2803f6a06 is described below

commit c2803f6a06868ba4bd662c515e3927da5f99c047
Author: Zhi <51...@users.noreply.github.com>
AuthorDate: Wed Apr 27 11:58:46 2022 +0800

    [docs] Update publication list (#11137)
    
    * [docs] Update publication list
    
    This PR updates the publication list with papers that use or are built on top of TVM.
    
    * Fix CI
---
 docs/reference/publications.rst | 65 +++++++++++++++++++++++++++++++++++++----
 1 file changed, 59 insertions(+), 6 deletions(-)

diff --git a/docs/reference/publications.rst b/docs/reference/publications.rst
index 3a90a3ad3c..2fbcd52294 100644
--- a/docs/reference/publications.rst
+++ b/docs/reference/publications.rst
@@ -22,10 +22,63 @@ TVM is developed as part of peer-reviewed research in machine learning compiler
 framework for CPUs, GPUs, and machine learning accelerators.
 
 This document includes references to publications describing the research,
-results, and design underlying TVM.
+results, and designs that use or are built on top of TVM.
 
-* `TVM: An Automated End-to-End Optimizing Compiler for Deep Learning <https://arxiv.org/abs/1802.04799>`_
-* `Learning to Optimize Tensor Programs <https://arxiv.org/pdf/1805.08166.pdf>`_
-* `Ansor: Generating High-Performance Tensor Programs for Deep Learning <https://arxiv.org/abs/2006.06762>`_
-* `Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference
-  <https://arxiv.org/abs/2006.03031>`_
+2018
+
+* `TVM: An Automated End-to-End Optimizing Compiler for Deep Learning`__, [Slides_]
+
+.. __: https://arxiv.org/abs/1802.04799
+.. _Slides: https://www.usenix.org/system/files/osdi18-chen.pdf
+
+* `Learning to Optimize Tensor Programs`__, [Slides]
+
+.. __: https://arxiv.org/pdf/1805.08166.pdf
+
+2020
+
+* `Ansor: Generating High-Performance Tensor Programs for Deep Learning`__, [Slides__] [Tutorial__]
+
+.. __: https://arxiv.org/abs/2006.06762
+.. __: https://www.usenix.org/sites/default/files/conference/protected-files/osdi20_slides_zheng.pdf
+.. __: https://tvm.apache.org/2021/03/03/intro-auto-scheduler
+
+2021
+
+* `Nimble: Efficiently Compiling Dynamic Neural Networks for Model Inference`__, [Slides__]
+
+.. __: https://arxiv.org/abs/2006.03031
+.. __: https://shenhaichen.com/slides/nimble_mlsys.pdf
+
+* `Cortex: A Compiler for Recursive Deep Learning Models`__, [Slides__]
+
+.. __: https://arxiv.org/pdf/2011.01383.pdf
+.. __: https://mlsys.org/media/mlsys-2021/Slides/1507.pdf
+
+* `UNIT: Unifying Tensorized Instruction Compilation`__, [Slides]
+
+.. __: https://arxiv.org/abs/2101.08458
+
+* `Lorien: Efficient Deep Learning Workloads Delivery`__, [Slides]
+
+.. __: https://assets.amazon.science/c2/46/2481c9064a8bbaebcf389dd5ad75/lorien-efficient-deep-learning-workloads-delivery.pdf
+
+
+* `Bring Your Own Codegen to Deep Learning Compiler`__, [Slides] [Tutorial__]
+
+.. __: https://arxiv.org/abs/2105.03215
+.. __: https://tvm.apache.org/2020/07/15/how-to-bring-your-own-codegen-to-tvm
+
+2022
+
+* `DietCode: Automatic Optimization for Dynamic Tensor Programs`__, [Slides]
+
+.. __: https://proceedings.mlsys.org/paper/2022/file/fa7cdfad1a5aaf8370ebeda47a1ff1c3-Paper.pdf
+
+* `Bolt: Bridging the Gap between Auto-tuners and Hardware-native Performance`__, [Slides]
+
+.. __: https://proceedings.mlsys.org/paper/2022/file/38b3eff8baf56627478ec76a704e9b52-Paper.pdf
+
+* `The CoRa Tensor Compiler: Compilation for Ragged Tensors with Minimal Padding`__, [Slides]
+
+.. __: https://arxiv.org/abs/2110.10221
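
Note on the list format used above: the new entries rely on reStructuredText anonymous hyperlinks, where each `Title`__ reference is matched, in order of appearance, to a ``.. __:`` target line below it; named references such as [Slides_] match a ``.. _Slides:`` target instead. A minimal sketch of the pattern (the title and URLs here are placeholders, not real entries):

    * `Example Paper Title`__, [Slides__]

    .. __: https://example.org/paper.pdf
    .. __: https://example.org/slides.pdf

Because anonymous references resolve by position, the paper reference picks up the first target and [Slides__] the second. Entries written as a bare [Slides] with no trailing underscore carry no target and render as literal text rather than a link.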