Posted to general@incubator.apache.org by Markus Weimer <we...@apache.org> on 2019/02/15 18:42:32 UTC

[Proposal] Apache TVM

Hi,

we'd like to start the discussion of accepting TVM into the incubator.
Please see the proposal below. I'd like to highlight a few things for
our discussion:

(1) The project already follows many Apache ways like meritocracy,
open development and such.

(2) The project recognizes an in-between state of "reviewer" that it
nominates people for between contributor and committer status. We'd
like to learn if and how to maintain that in the future.

(3) The project contains hardware as a software artifact. We are not
aware of another ASF project like that and wonder if and how it
affects its acceptance into the incubator.

Thanks!

Markus

=== Proposal ===

We propose to incubate the TVM project into the Apache Software Foundation. TVM
is an open, full-stack deep learning compiler for CPUs, GPUs, and specialized
accelerators. It aims to close the gap between productivity-focused deep
learning frameworks and performance- or efficiency-oriented hardware
backends.

=== Background ===

There is an increasing need to bring machine learning to a wide diversity of
hardware devices. Current frameworks rely on vendor-specific operator libraries
and optimize for a narrow range of server-class GPUs. Deploying workloads to new
platforms -- such as mobile phones, embedded devices, and accelerators (e.g.,
FPGAs, ASICs) -- requires significant manual effort. TVM is an end-to-end deep
learning compiler that exposes graph-level and operator-level optimizations to
provide performance portability to deep learning workloads across diverse
hardware back-ends. TVM solves optimization challenges specific to deep
learning, such as high-level operator fusion, mapping to arbitrary hardware
primitives, and memory latency hiding. It also automates optimization of
low-level programs to hardware characteristics by employing a novel,
learning-based cost modeling method for rapid exploration of program
optimizations.
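
To make "high-level operator fusion" concrete, here is a toy sketch in plain
Python (not TVM's actual API; every name below is invented for illustration).
Fusion merges adjacent elementwise operators into a single loop so the
intermediate result never round-trips through memory:

```python
def relu(x):
    return [max(v, 0.0) for v in x]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

# Unfused: add() materializes an intermediate list that relu() re-reads.
def unfused(a, b):
    return relu(add(a, b))

# Fused: one loop computes max(a[i] + b[i], 0) directly, with no
# intermediate buffer -- the transformation a fusing compiler performs.
def fused(a, b):
    return [max(x + y, 0.0) for x, y in zip(a, b)]

print(unfused([1.0, -3.0], [1.0, 1.0]))  # [2.0, 0.0]
print(fused([1.0, -3.0], [1.0, 1.0]))    # [2.0, 0.0]
```

The two versions compute identical results; the fused form simply avoids
allocating and traversing the intermediate tensor.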

Moreover, there is increasing interest in designing specialized hardware which
accelerates machine learning. Towards this goal, TVM introduces VTA, an open
source deep learning accelerator as part of its stack. The open source VTA
driver and hardware design is a crucial step toward building software support
for future ASICs. The TVM-VTA flow serves as a frontier for researchers and
practitioners to explore specialized hardware designs.


=== Rationale ===

Deep learning compilation will be the next frontier of machine learning systems.
TVM is already one of the leading open source projects pursuing this direction.

Specifically, TVM provides infrastructure to use machine learning to
automatically optimize deployment of deep learning programs on diverse hardware
backends.
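
The learning-based optimization loop mentioned above can be caricatured as
follows. This is a toy sketch, not AutoTVM's implementation: the candidate
"schedules" are just tile sizes, measure() is a stand-in for real on-device
measurement, and the "cost model" is a trivial nearest-neighbor predictor.

```python
import random

def measure(tile):
    # Stand-in for on-device measurement; cost is minimized at tile = 32.
    return abs(tile - 32) + random.random() * 0.1

class CostModel:
    """Toy learned cost model: remembers past measurements and predicts
    the cost of a new candidate from its nearest measured neighbor."""
    def __init__(self):
        self.history = {}

    def update(self, tile, cost):
        self.history[tile] = cost

    def predict(self, tile):
        if not self.history:
            return 0.0
        nearest = min(self.history, key=lambda t: abs(t - tile))
        return self.history[nearest]

def search(candidates, trials=8):
    model, best = CostModel(), None
    pool = list(candidates)
    for _ in range(min(trials, len(pool))):
        tile = min(pool, key=model.predict)  # pick predicted-cheapest ...
        pool.remove(tile)
        cost = measure(tile)                 # ... measure it for real ...
        model.update(tile, cost)             # ... and refine the model.
        if best is None or cost < best[1]:
            best = (tile, cost)
    return best

print(search([1, 2, 4, 8, 16, 32, 64, 128]))
```

The real system replaces the stand-ins with an actual schedule space, hardware
measurements, and a trained statistical cost model, but the explore/measure/
refine loop has this shape.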


=== VTA: Open Source Hardware Design ===

TVM also contains open source hardware as part of its stack. The VTA hardware
design is a fully open sourced deep learning accelerator that allows us to
experiment with compiler, driver, runtime, and execute the code on FPGA. VTA
provides a path to target future ASICs, and build software-driven solutions to
co-design future deep learning accelerators.

Having an open source hardware design in an ASF project is rare and perhaps
unprecedented. Below we outline our rationale for why it is necessary for the
community.

Deep learning specialized ASICs are going to be at the center of the AI
revolution. However, given the field's early stage, there is no open standard,
or even a publicly documented hardware interface, that open source software can
target. VTA provides such an open source hardware abstraction layer and allows
us to build in abstractions that can be effectively used to target other deep
learning accelerators.

Moreover, there is an increasing need to co-design future machine learning
systems with the hardware abstraction. Having a co-designed open source
hardware stack alongside the software creates a path toward this goal. In
short, we need open-source hardware to build the best open source software.

Finally, we can still view the VTA design as “software”, as its source code is
written in a hardware description language and can generate a “binary” that can
run on FPGAs and possibly simulators.


=== Current Status ===

TVM has been open sourced under the Apache License for one and a half years.
See the current project website (https://tvm.ai/), the GitHub repository
(https://github.com/dmlc/tvm/), and the TVM Conference
(https://sampl.cs.washington.edu/tvmconf/#about-tvmconf).

TVM has already been used in production; some highlights are AWS (SageMaker
Neo), Huawei (AI chip compilation), and Facebook (mobile optimization). We
anticipate the list of adopters will grow over the next few years.

=== Meritocracy ===

The TVM stack began as a research project of the SAMPL group at the Paul G.
Allen School of Computer Science & Engineering, University of Washington. The
project is now driven by an open source community involving multiple industry
and academic institutions. The project is currently governed by the Apache Way
(https://docs.tvm.ai/contribute/community.html). The project now has 14
committers and 6 PMC members, and the list is actively growing. The PMC uses a
Google Groups mailing list to vote in new committers/PMC members, which will be
moved to private@ after incubation.

The community highly values open collaboration among contributors from
different backgrounds. The current committers come from UW, Berkeley, Cornell,
SJTU, AMD, AWS, Huawei, Google, Facebook, and Ziosoft.


=== Community ===

The project currently has 173 contributors. As per the Apache way, all
discussions are conducted in publicly archivable places.

- GitHub issues are used to track development activities and RFCs.
- The roadmap is public and encourages participation from everyone in the
community.
- Discussion forums are used for general discussions: https://discuss.tvm.ai
  - The content of the Discourse forum can be considered a public archive,
  as it is fully searchable.
  - We also created a mailing-list archive of the forum, which we will forward
  to an Apache mailing list after incubation:
  https://groups.google.com/forum/#!forum/tvm-discuss-archive

- See https://tvm.ai/community
- See https://github.com/dmlc/tvm/releases for past releases.

Currently, GitHub issues serve as the dev@ channel. Notably, major features
always start with RFC discussions to encourage broad participation in the
community.

The community recognizes potential committers early by bringing in
contributors as code reviewers and encouraging them to participate in code
reviews. Code reviews and high-quality code are fundamental to the long-term
success of the project. The reviewer mechanism serves as a way to highlight
this aspect, as well as to help the community find good candidates to promote
to committer.



==== Development and Decision Process ====

See https://docs.tvm.ai/contribute/community.html#general-development-process
for the current development guideline. The key points are:

- An open public roadmap during development, which turns into release notes
- Major features start with an RFC; everything happens in public
- Public discussion is encouraged via archivable channels
- Strive to reach consensus on technical decisions through discussion
- Moderation from committers, with everyone’s participation encouraged

Example Roadmap: https://github.com/dmlc/tvm/issues/1170
The idea is to keep an active list of roadmap items that can be turned directly
into release notes. A public roadmap helps encourage general participation
from all contributors.

Example 1: New high-level IR
A recent major proposal in the community was to bring in a new high-level IR.
RFC thread: https://github.com/dmlc/tvm/issues/1673
The pull request: https://github.com/dmlc/tvm/pull/1672
Everyone who participated in the RFC was invited to review the code as well;
follow-up features are proposed as follow-up RFCs.

Example 2: Community guideline improvements
RFC thread: https://github.com/dmlc/tvm/issues/2017
A Slack channel was set up per community suggestion, but the community is still
encouraged to use it only for quick communication and to use publicly archived
channels for development: https://github.com/dmlc/tvm/issues/2174

Example 3: Python 3 timeline proposal
RFC thread: https://github.com/dmlc/tvm/issues/1602
Finished with the decision to respect backward compatibility and keep Python 2
support.

See
https://github.com/dmlc/tvm/issues?utf8=%E2%9C%93&q=label%3A%22status%3A+RFC%22+
for a full list of RFCs.


=== Alignment ===

TVM is useful for building deep learning deployment solutions. It is perhaps
also the first Apache incubator proposal that includes both open source software
and hardware system design.

It has the potential to benefit existing related ML projects such as MXNet,
Singa, SystemML, and Mahout by providing powerful low-level primitives for
matrix operations.


=== Known Risks ===

==== Orphaned products ====

The project has a diverse contributor base. As an example, the current
committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Google, Facebook,
Ziosoft, and Huawei. We are actively growing this list. Given that the project
has already been used in production, there is minimal risk of the project being
abandoned.

==== Inexperience with Open Source ====

The TVM community has extensive experience in open source. Three of the
current five PMC members are already PPMC members of existing Apache projects.
Over the course of development, the community has established a good process
for bringing in RFCs, holding discussions, and, most importantly, welcoming
new contributors in the Apache way.

==== Homogenous Developers ====

The project has a diverse contributor base. As an example, the current
committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei, Google,
Facebook, and Ziosoft. The community actively seeks to collaborate broadly.
The PMC follows a principle of *only* nominating committers from outside their
own organizations.


=== Reliance on Salaried Developers ===

Most of the current committers are volunteers.

=== Relationships with Other Apache Products ===

TVM can serve as a fundamental compiler stack for deep learning and machine
learning in general. We expect it can benefit projects like MXNet, Spark, Flink,
Mahout, and SystemML.

=== Documentation ===

See https://tvm.ai/

=== Initial Source ===

https://github.com/dmlc/tvm

We plan to move our repository to https://github.com/apache/incubator-tvm


=== Source and Intellectual Property Submission Plan ===

TVM source code is available under Apache V2 license. We will work with the
committers to get ICLAs signed.

=== External Dependencies ===

We put all the source level dependencies under
https://github.com/dmlc/tvm/tree/master/3rdparty

- dmlc-core (Apache2): https://github.com/dmlc/dmlc-core
- dlpack (Apache2): https://github.com/dmlc/dlpack
- HalideIR (MIT): https://github.com/dmlc/HalideIR
- rang (Unlicense): https://github.com/agauniyal/rang
- Compiler-RT (BSD)
- LLVM

All of the current dependencies are stable, which means that the current TVM
repo is standalone and main development activities happen only in the TVM repo.
The dependencies are updated periodically, at a rate of about once a month when
necessary. For source-level dependencies, we will always point to a stable
release version for future software releases.


=== External Dependencies on DMLC projects ===

There are three dependencies on dmlc projects in 3rdparty. The current
proposal is to keep these dependencies in 3rdparty. We elaborate on the
background of these dependencies below:

- dmlc-core: a minimal module for logging and memory serialization. It is
currently used by projects including Apache MXNet, TVM, and XGBoost. The
project is relatively stable, with around one change a week (most recent
changes come from the XGBoost project). TVM’s dependency on dmlc-core is
minimal; it only uses the logging feature.
- dlpack: a minimal consensus standard for an in-memory tensor format. It is
currently used by PyTorch, Apache MXNet, Chainer, and a few other projects.
- HalideIR: a minimal IR data structure isolated from a fork of the Halide
project. We keep the MIT license to respect the original license and its
origin. A common consensus in the TVM project is to keep the old derived code
in HalideIR (which is stable), with all new development happening in the TVM
repo.

The main reasons for proposing to keep these dependencies are:
- Each dependency has a user and developer community of its own that is
larger than the TVM community, or a different license (MIT for HalideIR).
- These dependencies are stable and updated at roughly a monthly rate.

While it is possible to fork the code into the TVM repo, given that the
current TVM repo is self-contained and community development is standalone, we
feel there is enough justification to treat these as 3rdparty dependencies.


=== Required Resources ===

==== Mailing List: ====
The usual mailing lists are expected to be set up when entering incubation:

* private@tvm.apache.org
* dev@tvm.apache.org, subscribed to GitHub issues
* discuss-archive@tvm.apache.org, archiving the discussion content of the
Discourse user forum


Currently, we only use GitHub issues for development and encourage the
community to use the discussion forum when possible. As a result, the current
GitHub issues serve a similar purpose to dev@, so we propose subscribing
GitHub issues to dev@ after incubation.

The community currently uses https://discuss.tvm.ai/ for general technical and
support discussions. The community forum is maintained by the PMC. We propose
to continue using the forum and to archive the posts to an Apache mailing
list. We already have a mechanism to do so (see
https://groups.google.com/forum/#!forum/tvm-discuss-archive).



==== Git Repositories: ====

Upon entering incubation, we plan to transfer the existing repo from
https://github.com/dmlc/tvm to https://github.com/apache/incubator-tvm.




==== Issue Tracking: ====

TVM currently uses GitHub to track issues. We would like to continue to do so
while we discuss migration possibilities with the ASF Infra team.

==== URL: ====

Current project website: https://tvm.ai/. As we proceed, the website will
migrate to https://tvm.incubator.apache.org and, hopefully, to
https://tvm.apache.org.

=== Initial Committers and PMCs ===

As the project already follows the Apache way of development (in terms of
meritocracy, community, and public archiving of discussion), we plan to
transition the current PMC members to PPMC members, and the committers to
Apache committers. There are also ongoing votes and discussions in the current
TVM PMC private mailing list about new committers/PMC members (we have also
invited our tentative mentors as observers to the mailing list). We plan to
migrate these discussions to private@ after the proposal has been accepted and
bring in the new committers/PPMC members according to the standard Apache
community procedure.


Initial PPMCs
- Tianqi Chen tqchen@apache.org
- Ziheng Jiang ziheng@apache.org
- Yizhi Liu liuyizhi@apache.org
- Thierry Moreau moreau@cs.washington.edu
- Haichen Shen shenhaichen@gmail.com
- Lianmin Zheng lianminzheng@gmail.com
- Markus Weimer weimer@apache.org
- Sebastian Schelter
- Byung-Gon Chun

Initial Committers (Including PPMCs)
- Aditya Atluri Aditya.Atluri@amd.com AMD
- Tianqi Chen tqchen@apache.org University of Washington
- Yuwei Hu huyuwei1995@gmail.com Cornell
- Nick Hynes nhynes@berkeley.edu UC Berkeley
- Ziheng Jiang ziheng@apache.org University of Washington
- Yizhi Liu liuyizhi@apache.org AWS
- Thierry Moreau moreau@cs.washington.edu University of Washington
- Siva srk.it38@gmail.com Huawei
- Haichen Shen shenhaichen@gmail.com AWS
- Masahiro Masuda masahi129@gmail.com Ziosoft
- Zhixun Tan phisiart@gmail.com Google
- Leyuan Wang laurawly@gmail.com AWS
- Eddie Yan eqy@cs.washington.edu University of Washington
- Lianmin Zheng lianminzheng@gmail.com Shanghai Jiao Tong University


=== Sponsors: ===

==== Champion: ====
* Markus Weimer, Microsoft

==== Mentors: ====
* Sebastian Schelter, New York University
* Byung-Gon Chun, Seoul National University

==== Sponsoring Entity ====
We are requesting the Incubator to sponsor this project.

---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org


Re: [Proposal] Apache TVM

Posted by Matt Sicker <bo...@gmail.com>.
Sounds like a rather exciting project! Very interesting to see open
source hardware, too. I agree that it's a valid area to act in, and it
will be increasingly necessary over time.

On Fri, 15 Feb 2019 at 14:18, Furkan KAMACI <fu...@gmail.com> wrote:
>
> Hi All,
>
> TVM is very promising and I am also so excited to see such a great
> project's proposal! I would love to be a mentor too if it is possible.
>
> Kind Regards,
> Furkan KAMACI
>
> On Fri, Feb 15, 2019 at 9:52 PM Timothy Chen <tn...@apache.org> wrote:
>
> > Very excited to see this proposed as well.
> >
> > I’d also like to volunteer mentoring if the community is open too.
> >
> > Tim
> >
> > On Fri, Feb 15, 2019 at 10:48 Henry Saputra <he...@gmail.com>
> > wrote:
> >
> > > HI Markus,
> > >
> > > I have been using TVM as part of ML platform work as consumer of the
> > > project, this is great news!
> > >
> > > Would love to come in and help as a Mentor of this project if it is Ok
> > with
> > > the community.
> > >
> > >
> > > Thanks,
> > >
> > > - Henry
> > >
> > > On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer <we...@apache.org>
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > we'd like to start the discussion of accepting TVM into the incubator.
> > > > Please see the proposal below. I'd like to highlight a few things for
> > > > our discussion:
> > > >
> > > > (1) The project already follows many Apache ways like meritocracy,
> > > > open development and such.
> > > >
> > > > (2) The project recognizes an in-between state of "reviewer" that it
> > > > nominates people for between contributor and committer status. We'd
> > > > like to learn if and how to maintain that in the future.
> > > >
> > > > (3) The project contains hardware as a software artifact. We are not
> > > > aware of another ASF project like that and wonder if and how it
> > > > affects its acceptance into the incubator.
> > > >
> > > > Thanks!
> > > >
> > > > Markus
> > > >
> > > > === Proposal ===
> > > >
> > > > We propose to incubate the TVM project the Apache Software Foundation.
> > > TVM
> > > > is a
> > > > full stack open deep learning compiler stack for CPUs, GPUs, and
> > > > specialized
> > > > accelerators. It aims to close the gap between the productivity-focused
> > > > deep
> > > > learning frameworks, and the performance- or efficiency-oriented
> > hardware
> > > > backends.
> > > >
> > > > === Background ===
> > > >
> > > > There is an increasing need to bring machine learning to a wide
> > diversity
> > > > of
> > > > hardware devices. Current frameworks rely on vendor-specific operator
> > > > libraries
> > > > and optimize for a narrow range of server-class GPUs. Deploying
> > workloads
> > > > to new
> > > > platforms -- such as mobile phones, embedded devices, and accelerators
> > > > (e.g.,
> > > > FPGAs, ASICs) -- requires significant manual effort. TVM is an end to
> > end
> > > > deep
> > > > learning a compiler that exposes graph-level and operator-level
> > > > optimizations to
> > > > provide performance portability to deep learning workloads across
> > diverse
> > > > hardware back-ends. TVM solves optimization challenges specific to deep
> > > > learning, such as high-level operator fusion, mapping to arbitrary
> > > hardware
> > > > primitives, and memory latency hiding. It also automates optimization
> > of
> > > > low-level programs to hardware characteristics by employing a novel,
> > > > learning-based cost modeling method for rapid exploration of program
> > > > optimizations.
> > > >
> > > > Moreover, there is increasing interest in designing specialized
> > hardware
> > > > which
> > > > accelerates machine learning. Towards this goal, TVM introduces VTA, an
> > > > open
> > > > source deep learning accelerator as part of its stack. The open source
> > > VTA
> > > > driver and hardware design is a crucial step toward building software
> > > > support
> > > > for future ASICs. The TVM-VTA flow acts as a is the great frontier for
> > > > researchers and practitioners to explore specialized hardware designs.
> > > >
> > > >
> > > > === Rationale ===
> > > >
> > > > Deep learning compilation will be the next frontier of machine learning
> > > > systems.
> > > > TVM is already one of the leading open source projects pursuing this
> > > > direction.
> > > >
> > > > Specifically, TVM provides infrastructure to use machine learning to
> > > > automatically optimize deployment of deep learning programs on diverse
> > > > hardware
> > > > backends.
> > > >
> > > >
> > > > === VTA: Open Source Hardware Design ===
> > > >
> > > > TVM also contains open source hardware as part of its stack. The VTA
> > > > hardware
> > > > design is a fully open sourced deep learning accelerator that allows us
> > > to
> > > > experiment with compiler, driver, runtime, and execute the code on
> > FPGA.
> > > > VTA
> > > > provides a path to target future ASICs, and build software-driven
> > > > solutions to
> > > > co-design future deep learning accelerators.
> > > >
> > > > Having an open source hardware design in an ASF project is rare and
> > > perhaps
> > > > unprecedented. We put some of our rationale on why it is necessary for
> > > the
> > > > community.
> > > >
> > > > Deep learning specialized ASICs are going to be at the center of the AI
> > > > revolution. However, given its early shape, there is no open standard,
> > or
> > > > even
> > > > any available information hardware interface that allows an open source
> > > > software
> > > > to target to. VTA provides such open source hardware abstraction layer
> > > and
> > > > allows us to build in abstractions that can be effectively used to
> > target
> > > > other
> > > > deep learning accelerators.
> > > >
> > > > Moreover, there is an increasing need for co-designing future of
> > machine
> > > > learning systems with the hardware abstraction. Having a co-designed
> > open
> > > > source
> > > > hardware stack along with the software creates a path for this route.
> > In
> > > > short,
> > > > we need open-source hardware to build the best open source software.
> > > >
> > > > Finally, we can still view VTA design as “software”, as its source code
> > > is
> > > > written in source description language and can generate “binary” which
> > > can
> > > > run
> > > > on FPGA and possibly simulators.
> > > >
> > > >
> > > > === Current Status ===
> > > >
> > > > TVM is open sourced under the Apache License for one and half years.
> > See
> > > > the
> > > > current project website (https://tvm.ai/), Github
> > > > (https://github.com/dmlc/tvm/), as well as TVM Conference
> > > > (https://sampl.cs.washington.edu/tvmconf/#about-tvmconf)
> > > >
> > > > TVM has already been used in production, some highlights are AWS
> > > (Sagemaker
> > > > Neo), Huawei (AI Chip compilation) and Facebook (mobile optimization).
> > We
> > > > anticipate the list of adopters to grow over the next few years.
> > > >
> > > > === Meritocracy ===
> > > >
> > > > The TVM stack began as a research project of the SAMPL group at Paul G.
> > > > Allen
> > > > School of Computer Science & Engineering, University of Washington. The
> > > > project
> > > > is now driven by an open source community involving multiple industry
> > and
> > > > academic institutions. The project is currently governed by the Apache
> > > Way
> > > > (https://docs.tvm.ai/contribute/community.html). The project now has
> > 14
> > > > committers and 6 PMCs, and the list is actively growing. The PMCs uses
> > a
> > > > google
> > > > group mail-list to vote in new committers/PMCs, which will be moved to
> > > > private@
> > > > after incubation.
> > > >
> > > > The community highly values open collaboration among contributors from
> > > > different
> > > > backgrounds.The current committers come from UW, Berkeley, Cornell,
> > SJTU,
> > > > AMD,
> > > > AWS, Huawei, Google, Facebook, Ziosoft.
> > > >
> > > >
> > > > === Community ===
> > > >
> > > > The project currently has 173 contributors. As per the Apache way, all
> > > the
> > > > discussions are conducted in publicly archivable places.
> > > >
> > > > - Github issues are used to track development activities and RFC.
> > > > - The roadmap is public and encourages participation from everyone in
> > the
> > > > community.
> > > > - Discussion forums for general discussions. https://discuss.tvm.ai
> > > > - The content of the discourse forum can be considered as a public
> > > archive
> > > > as it is searchable with all the content
> > > > - We also created a mail-list archive of the forum, which we will
> > forward
> > > > to
> > > > an Apache mail-list after incubation
> > > > https://groups.google.com/forum/#!forum/tvm-discuss-archive
> > > >
> > > > - See https://tvm.ai/community
> > > > - See https://github.com/dmlc/tvm/releases for past releases.
> > > >
> > > > Currently, Github issue serves as dev@ channel. Notably, major
> > features
> > > > always
> > > > start from RFCs discussions to encourage broad participation in the
> > > > community.
> > > >
> > > > The community recognizes potential committers early by bringing
> > > > contributors as
> > > > code reviewers and encourages them to participate in code reviews. Code
> > > > reviews
> > > > and high-quality code are fundamental to the long-term success of the
> > > > project.
> > > > The reviewer mechanism in the community serves a way to highlight this
> > > > aspect as
> > > > well as helping the community find good candidates to promote to
> > > > committers.
> > > >
> > > >
> > > >
> > > > ==== Development and Decision Process ====
> > > >
> > > > See
> > > >
> > >
> > https://docs.tvm.ai/contribute/community.html#general-development-process
> > > > for the current development guideline. The key points are: Open public
> > > > roadmap
> > > > during development, which turns into release notes Major features start
> > > > with an
> > > > RFC, everything happens in public Encourage public discussion via
> > > > archivable
> > > > channels Strive to reach a consensus on technical decisions through
> > > > discussion
> > > > Moderation from committers and encourage everyone’s participation
> > > >
> > > > Example Roadmap: https://github.com/dmlc/tvm/issues/1170
> > > > The idea is to keep an active list of roadmaps that can be turned
> > > directly
> > > > into a release note. Public roadmap helps to encourage general
> > > > participation
> > > > from all contributors.
> > > >
> > > > Example 1:
> > > > Recently a major proposal in the community is to bring in a new
> > > > high-level IR, RFC thread: https://github.com/dmlc/tvm/issues/1673 The
> > > > pull
> > > > request: https://github.com/dmlc/tvm/pull/1672 Everyone who
> > participated
> > > > in the
> > > > RFC is invited to review the code as well - Follow up features are
> > > > proposed as
> > > > follow up RFCs.
> > > >
> > > > Example 2: Community guideline improvements
> > > > RFC thread: https://github.com/dmlc/tvm/issues/2017
> > > > Slack channel setup as per community suggestion, but still encourage
> > the
> > > > community to only use it for quick communication and use publicly
> > > archived
> > > > channels for development: https://github.com/dmlc/tvm/issues/2174
> > > >
> > > > Example 3: Python3 timeline proposal
> > > > RFC thread: https://github.com/dmlc/tvm/issues/1602
> > > > Finished with the decision to respect backward compatibility and keep
> > > > python2
> > > > support.
> > > >
> > > > See
> > > >
> > > >
> > >
> > https://github.com/dmlc/tvm/issues?utf8=%E2%9C%93&q=label%3A%22status%3A+RFC%22+
> > > > for a full list of RFCs.
> > > >
> > > >
> > > > === Alignment ===
> > > >
> > > > TVM is useful for building deep learning deployment solutions. It is
> > > > perhaps
> > > > also the first Apache incubator proposal that includes both open source
> > > > software
> > > > and hardware system design.
> > > >
> > > > It has the potential to benefit existing related ML projects such as
> > > MXNet,
> > > > Singa, SystemML, and Mahout by providing powerful low-level primitives
> > > for
> > > > matrix operations.
> > > >
> > > >
> > > > === Known Risks ===
> > > >
> > > > ==== Orphaned products ====
> > > >
> > > > The project has a diverse contributor base. As an example, the current
> > > > committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Google,
> > > > Facebook,
> > > > Ziosoft, Huawei. We are actively growing this list. Given that the
> > > project
> > > > has
> > > > already been used in production, there is a minimum risk of the project
> > > > being
> > > > abandoned.
> > > >
> > > > ==== Inexperience with Open Source ====
> > > >
> > > > The TVM community has extensive experience in open source. Three of
> > > > current five
> > > > PMCs are already PPMCs of existing Apache projects. Over the course of
> > > > development, the community already has a good way bringing RFCs,
> > > > discussions and
> > > > most importantly, welcoming new contributors in the Apache way.
> > > >
> > > > ==== Homogenous Developers ====
> > > >
> > > > The project has a diverse contributor base. As an example, the current
> > > > committers comes from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei,
> > > > Google,
> > > > Facebook, Ziosoft. The community actively seeks to collaborative
> > broadly.
> > > > The
> > > > PMCs followed a principle to *only* nominate committers outside their
> > own
> > > > organizations.
> > > >
> > > >
> > > > === Reliance on Salaried Developers ===
> > > >
> > > > Most of the current committers are volunteers.
> > > >
> > > > === Relationships with Other Apache Products ===
> > > >
> > > > TVM can serve as a fundamental compiler stack for deep learning and
> > > machine
> > > > learning in general. We expect it can benefit projects like MXNet,
> > Spark,
> > > > Flink,
> > > > Mahout, and SystemML.
> > > >
> > > > === Documentation ===
> > > >
> > > > See https://tvm.ai/
> > > >
> > > > === Initial Source ===
> > > >
> > > > https://github.com/dmlc/tvm
> > > >
> > > > We plan to move our repository to
> > > https://github.com/apache/incubator-tvm
> > > >
> > > >
> > > > === Source and Intellectual Property Submission Plan ===
> > > >
> > > > TVM source code is available under Apache V2 license. We will work with
> > > the
> > > > committers to get ICLAs signed.
> > > >
> > > > === External Dependencies ===
> > > >
> > > > We put all the source level dependencies under
> > > > https://github.com/dmlc/tvm/tree/master/3rdparty
> > > >
> > > > - dmlc-core (Apache2): https://github.com/dmlc/dmlc-core
> > > > - dlpack (Apache2): https://github.com/dmlc/dlpack
> > > > - HalideIR (MIT): https://github.com/dmlc/HalideIR
> > > > - range(Unlicense): https://github.com/agauniyal/rang
> > > > - Compiler-RT (BSD)
> > > > - LLVM
> > > >
> > > > All of the current he dependencies are stable, which means that the
> > > > current TVM
> > > > repo is standalone and main development activities only happen at the
> > TVM
> > > > repo.
> > > > The dependencies are periodically updated in the rate about once a
> > month
> > > > when
> > > > necessary. For source level dependencies, we will always point to a
> > > stable
> > > > release version for software release in the future.
> > > >
> > > >
> > > > === External Dependencies on DMLC projects ===
> > > >
> > > > There are three dependencies on dmlc projects in 3rdparty. The current
> > > > proposal is to keep them there. We elaborate on the background of these
> > > > dependencies below:
> > > >
> > > > - dmlc-core: a minimal module for logging and memory serialization. It is
> > > > currently used by projects including Apache MXNet, TVM, and XGBoost. The
> > > > project is relatively stable, with around one change a week (most recent
> > > > changes come from the XGBoost project). TVM's dependency on dmlc-core is
> > > > minimal: TVM only uses its logging feature.
> > > > - dlpack: a minimal consensus standard for an in-memory Tensor format. It is
> > > > currently used by PyTorch, Apache MXNet, Chainer, and a few other projects.
> > > > - HalideIR: a minimal IR data structure isolated from a fork of the Halide
> > > > project. We keep the MIT license to respect the original license and the
> > > > code's origin. A common consensus in the TVM project is to keep the old
> > > > derived code in HalideIR (which is stable) while all new development happens
> > > > in the TVM repo.
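As an aside for readers unfamiliar with dlpack: the sketch below illustrates, in plain Python, the kind of metadata a DLPack-style tensor descriptor carries. The field names here are illustrative only; the actual standard is the C struct `DLTensor` defined by the dlpack project.

```python
from dataclasses import dataclass
from typing import Tuple

# Simplified, illustrative sketch of a DLPack-style tensor descriptor.
# The real standard is the C struct DLTensor in dlpack; the fields below
# loosely mirror it and are not the exact names or types.
@dataclass
class TensorDescriptor:
    data_ptr: int             # address of the underlying memory buffer
    device_type: str          # e.g. "cpu" or "gpu"
    ndim: int                 # number of dimensions
    dtype: str                # e.g. "float32"
    shape: Tuple[int, ...]    # extent of each dimension
    strides: Tuple[int, ...]  # element strides per dimension

# A 2x3 row-major float32 tensor on CPU.
desc = TensorDescriptor(
    data_ptr=0x1000, device_type="cpu", ndim=2,
    dtype="float32", shape=(2, 3), strides=(3, 1))

# Sanity check: ndim agrees with the length of the shape tuple.
assert desc.ndim == len(desc.shape)
```

Because frameworks agree on one such descriptor, tensors can be exchanged between, e.g., PyTorch, Apache MXNet, and TVM without copying the underlying buffer.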
> > > >
> > > > The main reasons to keep these dependencies are:
> > > > - Each dependency has a user and developer community of its own, which is
> > > > larger than the TVM community, or a different license (MIT in the case of
> > > > HalideIR).
> > > > - These dependencies are stable and update at roughly a monthly rate.
> > > >
> > > > While it is possible to fork the code into the TVM repo, given that the
> > > > current TVM repo is self-contained and community development is stand-alone,
> > > > we feel that there is enough justification to treat these as 3rdparty
> > > > dependencies.
> > > >
> > > >
> > > > === Required Resources ===
> > > >
> > > > ==== Mailing List: ====
> > > > The usual mailing lists are expected to be set up when entering incubation:
> > > >
> > > > * private@tvm.apache.org
> > > > * dev@tvm.apache.org, which will subscribe to GitHub issues.
> > > > * discuss-archive@tvm.apache.org, which archives the content of the
> > > > Discourse user forum
> > > >
> > > >
> > > > Currently, we use GitHub issues only for development and encourage the
> > > > community to use the discussion forum when possible. As a result, the
> > > > current GitHub issues serve a similar purpose as dev@, so we propose to
> > > > subscribe GitHub issues to dev@ after incubation.
> > > >
> > > > The community currently uses https://discuss.tvm.ai/ for general technical
> > > > and support discussions. The forum is maintained by the PMC. We propose to
> > > > continue using the forum and to archive its posts to an Apache mailing
> > > > list. We already have a mechanism to do so (see
> > > > https://groups.google.com/forum/#!forum/tvm-discuss-archive).
> > > >
> > > >
> > > >
> > > > ==== Git Repositories: ====
> > > >
> > > > Upon entering incubation, we plan to transfer the existing repo from
> > > > https://github.com/dmlc/tvm to https://github.com/apache/incubator-tvm.
> > > >
> > > >
> > > >
> > > >
> > > > ==== Issue Tracking: ====
> > > >
> > > > TVM currently uses GitHub to track issues. We would like to continue to do
> > > > so while we discuss migration possibilities with the ASF Infra team.
> > > >
> > > > ==== URL: ====
> > > >
> > > > The current project website is https://tvm.ai/. As we proceed, the website
> > > > will migrate to https://tvm.incubator.apache.org and hopefully to
> > > > https://tvm.apache.org.
> > > >
> > > > === Initial Committers and PMCs ===
> > > >
> > > > As the project has already followed the Apache way of development (in terms
> > > > of meritocracy, community, and archiving of public discussion), we plan to
> > > > transition the current PMC members to PPMC members, and the current
> > > > committers to Apache committers. There are also ongoing votes and
> > > > discussions on the current TVM PMC private mailing list about new
> > > > committers/PMC members (we also invited our tentative mentors as observers
> > > > to the list). We plan to migrate these discussions to private@ after the
> > > > proposal has been accepted and to bring in the new committers/PPMC members
> > > > according to the standard Apache community procedure.
> > > >
> > > >
> > > > Initial PPMCs
> > > > - Tianqi Chen tqchen@apache.org
> > > > - Ziheng Jiang ziheng@apache.org
> > > > - Yizhi Liu liuyizhi@apache.org
> > > > - Thierry Moreau moreau@cs.washington.edu
> > > > - Haichen Shen shenhaichen@gmail.com
> > > > - Lianmin Zheng lianminzheng@gmail.com
> > > > - Markus Weimer weimer@apache.org
> > > > - Sebastian Schelter
> > > > - Byung-Gon Chun
> > > >
> > > > Initial Committers (Including PPMCs)
> > > > - Aditya Atluri Aditya.Atluri@amd.com AMD
> > > > - Tianqi Chen tqchen@apache.org University of Washington
> > > > - Yuwei Hu huyuwei1995@gmail.com Cornell
> > > > - Nick Hynes nhynes@berkeley.edu UC Berkeley
> > > > - Ziheng Jiang ziheng@apache.org University of Washington
> > > > - Yizhi Liu liuyizhi@apache.org AWS
> > > > - Thierry Moreau moreau@cs.washington.edu University of Washington
> > > > - Siva srk.it38@gmail.com Huawei
> > > > - Haichen Shen shenhaichen@gmail.com AWS
> > > > - Masahiro Masuda masahi129@gmail.com Ziosoft
> > > > - Zhixun Tan phisiart@gmail.com Google
> > > > - Leyuan Wang laurawly@gmail.com AWS
> > > > - Eddie Yan eqy@cs.washington.edu University of Washington
> > > > - Lianmin Zheng lianminzheng@gmail.com Shanghai Jiao Tong University
> > > >
> > > >
> > > > === Sponsors: ===
> > > >
> > > > ==== Champion: ====
> > > > * Markus Weimer, Microsoft
> > > >
> > > > ==== Mentors: ====
> > > > * Sebastian Schelter, New York University
> > > > * Byung-Gon Chun, Seoul National University
> > > >
> > > > ==== Sponsoring Entity ====
> > > > We are requesting the Incubator to sponsor this project.
> > > >
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> > > > For additional commands, e-mail: general-help@incubator.apache.org
> > > >
> > > >
> > >
> >



-- 
Matt Sicker <bo...@gmail.com>



Re: [Proposal] Apache TVM

Posted by Furkan KAMACI <fu...@gmail.com>.
Hi All,

TVM is very promising and I am also so excited to see such a great
project's proposal! I would love to be a mentor too if it is possible.

Kind Regards,
Furkan KAMACI

On Fri, Feb 15, 2019 at 9:52 PM Timothy Chen <tn...@apache.org> wrote:

> Very excited to see this proposed as well.
>
> I’d also like to volunteer mentoring if the community is open too.
>
> Tim
>
> On Fri, Feb 15, 2019 at 10:48 Henry Saputra <he...@gmail.com>
> wrote:
>
> > Hi Markus,
> >
> > I have been using TVM as part of ML platform work as a consumer of the
> > project; this is great news!
> >
> > Would love to come in and help as a Mentor of this project if it is Ok
> with
> > the community.
> >
> >
> > Thanks,
> >
> > - Henry
> >

Re: [Proposal] Apache TVM

Posted by Liang Chen <ch...@gmail.com>.
Hi

+1 also,  excited to see TVM proposal.

Regards
Liang


Timothy Chen-2 wrote
> Very excited to see this proposed as well.
> 
> I’d also like to volunteer mentoring if the community is open too.
> 
> Tim
> 
> On Fri, Feb 15, 2019 at 10:48 Henry Saputra &lt;

> henry.saputra@

> &gt; wrote:
> 
>> HI Markus,
>>
>> I have been using TVM as part of ML platform work as consumer of the
>> project, this is great news!
>>
>> Would love to come in and help as a Mentor of this project if it is Ok
>> with
>> the community.
>>
>>
>> Thanks,
>>
>> - Henry
>>
>> On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer &lt;

> weimer@

> &gt; wrote:
>>
>> > Hi,
>> >
>> > we'd like to start the discussion of accepting TVM into the incubator.
>> > Please see the proposal below. I'd like to highlight a few things for
>> > our discussion:
>> >
>> > (1) The project already follows many Apache ways like meritocracy,
>> > open development and such.
>> >
>> > (2) The project recognizes an in-between state of "reviewer" that it
>> > nominates people for between contributor and committer status. We'd
>> > like to learn if and how to maintain that in the future.
>> >
>> > (3) The project contains hardware as a software artifact. We are not
>> > aware of another ASF project like that and wonder if and how it
>> > affects its acceptance into the incubator.
>> >
>> > Thanks!
>> >
>> > Markus
>> >
>> > === Proposal ===
>> >
>> > We propose to incubate the TVM project the Apache Software Foundation.
>> TVM
>> > is a
>> > full stack open deep learning compiler stack for CPUs, GPUs, and
>> > specialized
>> > accelerators. It aims to close the gap between the productivity-focused
>> > deep
>> > learning frameworks, and the performance- or efficiency-oriented
>> hardware
>> > backends.
>> >
>> > === Background ===
>> >
>> > There is an increasing need to bring machine learning to a wide
>> diversity
>> > of
>> > hardware devices. Current frameworks rely on vendor-specific operator
>> > libraries
>> > and optimize for a narrow range of server-class GPUs. Deploying
>> workloads
>> > to new
>> > platforms -- such as mobile phones, embedded devices, and accelerators
>> > (e.g.,
>> > FPGAs, ASICs) -- requires significant manual effort. TVM is an end to
>> end
>> > deep
>> > learning a compiler that exposes graph-level and operator-level
>> > optimizations to
>> > provide performance portability to deep learning workloads across
>> diverse
>> > hardware back-ends. TVM solves optimization challenges specific to deep
>> > learning, such as high-level operator fusion, mapping to arbitrary
>> hardware
>> > primitives, and memory latency hiding. It also automates optimization
>> of
>> > low-level programs to hardware characteristics by employing a novel,
>> > learning-based cost modeling method for rapid exploration of program
>> > optimizations.
>> >
>> > Moreover, there is increasing interest in designing specialized
>> hardware
>> > which
>> > accelerates machine learning. Towards this goal, TVM introduces VTA, an
>> > open
>> > source deep learning accelerator as part of its stack. The open source
>> VTA
>> > driver and hardware design is a crucial step toward building software
>> > support
>> > for future ASICs. The TVM-VTA flow acts as a is the great frontier for
>> > researchers and practitioners to explore specialized hardware designs.
>> >
>> >
>> > === Rationale ===
>> >
>> > Deep learning compilation will be the next frontier of machine learning
>> > systems.
>> > TVM is already one of the leading open source projects pursuing this
>> > direction.
>> >
>> > Specifically, TVM provides infrastructure to use machine learning to
>> > automatically optimize deployment of deep learning programs on diverse
>> > hardware
>> > backends.
>> >
>> >
>> > === VTA: Open Source Hardware Design ===
>> >
>> > TVM also contains open source hardware as part of its stack. The VTA
>> > hardware
>> > design is a fully open sourced deep learning accelerator that allows us
>> to
>> > experiment with compiler, driver, runtime, and execute the code on
>> FPGA.
>> > VTA
>> > provides a path to target future ASICs, and build software-driven
>> > solutions to
>> > co-design future deep learning accelerators.
>> >
>> > Having an open source hardware design in an ASF project is rare and
>> perhaps
>> > unprecedented. We put some of our rationale on why it is necessary for
>> the
>> > community.
>> >
>> > Deep learning specialized ASICs are going to be at the center of the AI
>> > revolution. However, given its early shape, there is no open standard,
>> or
>> > even
>> > any available information hardware interface that allows an open source
>> > software
>> > to target to. VTA provides such open source hardware abstraction layer
>> and
>> > allows us to build in abstractions that can be effectively used to
>> target
>> > other
>> > deep learning accelerators.
>> >
>> > Moreover, there is an increasing need for co-designing future of
>> machine
>> > learning systems with the hardware abstraction. Having a co-designed
>> open
>> > source
>> > hardware stack along with the software creates a path for this route.
>> In
>> > short,
>> > we need open-source hardware to build the best open source software.
>> >
>> > Finally, we can still view the VTA design as “software”: its source
>> > code is written in a hardware description language and can generate a
>> > “binary” that runs on FPGAs and, possibly, simulators.
>> >
>> >
>> > === Current Status ===
>> >
>> > TVM has been open sourced under the Apache License for one and a half
>> > years. See the current project website (https://tvm.ai/), GitHub
>> > (https://github.com/dmlc/tvm/), as well as the TVM Conference
>> > (https://sampl.cs.washington.edu/tvmconf/#about-tvmconf).
>> >
>> > TVM has already been used in production; some highlights are AWS
>> > (SageMaker Neo), Huawei (AI chip compilation), and Facebook (mobile
>> > optimization). We anticipate that the list of adopters will grow over
>> > the next few years.
>> >
>> > === Meritocracy ===
>> >
>> > The TVM stack began as a research project of the SAMPL group at Paul G.
>> > Allen
>> > School of Computer Science & Engineering, University of Washington. The
>> > project
>> > is now driven by an open source community involving multiple industry
>> and
>> > academic institutions. The project is currently governed by the Apache
>> Way
>> > (https://docs.tvm.ai/contribute/community.html). The project now has 14
>> > committers and 6 PMC members, and the list is actively growing. The PMC
>> > uses a Google Groups mailing list to vote in new committers/PMC members;
>> > this list will be moved to private@ after incubation.
>> >
>> > The community highly values open collaboration among contributors from
>> > different backgrounds. The current committers come from UW, Berkeley,
>> > Cornell, SJTU, AMD, AWS, Huawei, Google, Facebook, and Ziosoft.
>> >
>> >
>> > === Community ===
>> >
>> > The project currently has 173 contributors. As per the Apache way, all
>> the
>> > discussions are conducted in publicly archivable places.
>> >
>> > - GitHub issues are used to track development activities and RFCs.
>> > - The roadmap is public and encourages participation from everyone in
>> the
>> > community.
>> > - Discussion forums for general discussions. https://discuss.tvm.ai
>> > - The content of the Discourse forum can be considered a public
>> > archive, as all of its content is searchable.
>> > - We also created a mailing-list archive of the forum, which we will
>> > forward to an Apache mailing list after incubation:
>> > https://groups.google.com/forum/#!forum/tvm-discuss-archive
>> >
>> > - See https://tvm.ai/community
>> > - See https://github.com/dmlc/tvm/releases for past releases.
>> >
>> > Currently, GitHub issues serve as the dev@ channel. Notably, major
>> > features always start as RFC discussions to encourage broad
>> > participation in the community.
>> >
>> > The community recognizes potential committers early by bringing
>> > contributors on as code reviewers and encouraging them to participate in
>> > code reviews. Code reviews and high-quality code are fundamental to the
>> > long-term success of the project. The reviewer mechanism serves as a way
>> > to highlight this aspect, as well as to help the community find good
>> > candidates to promote to committers.
>> >
>> >
>> >
>> > ==== Development and Decision Process ====
>> >
>> > See
>> >
>> https://docs.tvm.ai/contribute/community.html#general-development-process
>> > for the current development guidelines. The key points are:
>> > - An open public roadmap during development, which turns into release
>> > notes
>> > - Major features start with an RFC; everything happens in public
>> > - Public discussion is encouraged via archivable channels
>> > - Strive to reach a consensus on technical decisions through discussion
>> > - Moderation from committers, encouraging everyone’s participation
>> >
>> > Example Roadmap: https://github.com/dmlc/tvm/issues/1170
>> > The idea is to keep an active list of roadmaps that can be turned
>> > directly into release notes. A public roadmap helps encourage general
>> > participation from all contributors.
>> >
>> > Example 1:
>> > A recent major proposal in the community was to bring in a new
>> > high-level IR. RFC thread: https://github.com/dmlc/tvm/issues/1673
>> > The pull request: https://github.com/dmlc/tvm/pull/1672
>> > Everyone who participated in the RFC was invited to review the code as
>> > well; follow-up features are proposed as follow-up RFCs.
>> >
>> > Example 2: Community guideline improvements
>> > RFC thread: https://github.com/dmlc/tvm/issues/2017
>> > A Slack channel was set up per community suggestion, but we still
>> > encourage the community to use it only for quick communication and to
>> > use publicly archived channels for development:
>> > https://github.com/dmlc/tvm/issues/2174
>> >
>> > Example 3: Python3 timeline proposal
>> > RFC thread: https://github.com/dmlc/tvm/issues/1602
>> > It finished with the decision to respect backward compatibility and
>> > keep Python 2 support.
>> >
>> > See
>> >
>> >
>> https://github.com/dmlc/tvm/issues?utf8=%E2%9C%93&q=label%3A%22status%3A+RFC%22+
>> > for a full list of RFCs.
>> >
>> >
>> > === Alignment ===
>> >
>> > TVM is useful for building deep learning deployment solutions. It is
>> > perhaps
>> > also the first Apache incubator proposal that includes both open source
>> > software
>> > and hardware system design.
>> >
>> > It has the potential to benefit existing related ML projects such as
>> MXNet,
>> > Singa, SystemML, and Mahout by providing powerful low-level primitives
>> for
>> > matrix operations.
>> >
>> >
>> > === Known Risks ===
>> >
>> > ==== Orphaned products ====
>> >
>> > The project has a diverse contributor base. As an example, the current
>> > committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Google,
>> > Facebook,
>> > Ziosoft, Huawei. We are actively growing this list. Given that the
>> > project has already been used in production, there is minimal risk of
>> > the project being abandoned.
>> >
>> > ==== Inexperience with Open Source ====
>> >
>> > The TVM community has extensive experience in open source. Three of the
>> > current five PMC members are already PPMC members of existing Apache
>> > projects. Over the course of development, the community has established
>> > good practices for bringing in RFCs and discussions and, most
>> > importantly, for welcoming new contributors in the Apache way.
>> >
>> > ==== Homogenous Developers ====
>> >
>> > The project has a diverse contributor base. As an example, the current
>> > committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei,
>> > Google, Facebook, Ziosoft. The community actively seeks to collaborate
>> > broadly. The PMC follows a principle of *only* nominating committers
>> > from outside their own organizations.
>> >
>> >
>> > === Reliance on Salaried Developers ===
>> >
>> > Most of the current committers are volunteers.
>> >
>> > === Relationships with Other Apache Products ===
>> >
>> > TVM can serve as a fundamental compiler stack for deep learning and
>> machine
>> > learning in general. We expect it can benefit projects like MXNet,
>> Spark,
>> > Flink,
>> > Mahout, and SystemML.
>> >
>> > === Documentation ===
>> >
>> > See https://tvm.ai/
>> >
>> > === Initial Source ===
>> >
>> > https://github.com/dmlc/tvm
>> >
>> > We plan to move our repository to
>> https://github.com/apache/incubator-tvm
>> >
>> >
>> > === Source and Intellectual Property Submission Plan ===
>> >
>> > TVM source code is available under Apache V2 license. We will work with
>> the
>> > committers to get ICLAs signed.
>> >
>> > === External Dependencies ===
>> >
>> > We put all the source level dependencies under
>> > https://github.com/dmlc/tvm/tree/master/3rdparty
>> >
>> > - dmlc-core (Apache2): https://github.com/dmlc/dmlc-core
>> > - dlpack (Apache2): https://github.com/dmlc/dlpack
>> > - HalideIR (MIT): https://github.com/dmlc/HalideIR
>> > - rang (Unlicense): https://github.com/agauniyal/rang
>> > - Compiler-RT (BSD)
>> > - LLVM
>> >
>> > All of the current dependencies are stable, which means that the
>> > current TVM repo is standalone and the main development activity happens
>> > only in the TVM repo. The dependencies are periodically updated, at a
>> > rate of about once a month, when necessary. For source-level
>> > dependencies, we will always point to a stable release version for
>> > future software releases.
>> >
>> >
>> > === External Dependencies on DMLC projects ===
>> >
>> > There are three dependencies on DMLC projects in 3rdparty. The current
>> > proposal is to keep these dependencies in 3rdparty. We elaborate on the
>> > background of these dependencies below:
>> >
>> > - dmlc-core: a minimal module for logging and memory serialization. It
>> > is currently used by projects including Apache MXNet, TVM, and XGBoost.
>> > The project is relatively stable, with around one change a week (most
>> > recent changes come from the XGBoost project). TVM’s dependency on
>> > dmlc-core is minimal and uses only its logging feature.
>> > - dlpack: a minimal consensus standard for an in-memory tensor format.
>> > It is currently used by PyTorch, Apache MXNet, Chainer, and a few other
>> > projects.
>> > - HalideIR: a minimal IR data structure that was isolated from a fork
>> > of the Halide project. We keep the MIT license to respect the original
>> > license and its origin. A common consensus in the TVM project is that we
>> > keep the old derived code in HalideIR (which is stable), while all new
>> > development happens in the TVM repo.
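To make concrete what a "minimal consensus standard for an in-memory tensor format" provides, here is a pure-Python sketch of the kind of metadata such a standard carries so that two frameworks can share one buffer without copying. The class and field names are simplified for illustration; see the dlpack repository for the actual C struct.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TensorDescriptor:
    """Simplified sketch of a DLPack-style tensor descriptor: only the
    metadata a consumer needs to interpret a shared memory buffer."""
    data_ptr: int                        # address of the underlying buffer
    shape: List[int]                     # extent of each dimension
    dtype: str                           # element type, e.g. "float32"
    device: str = "cpu"                  # which device holds the buffer
    strides: Optional[List[int]] = None  # None means compact row-major

    def num_elements(self) -> int:
        n = 1
        for extent in self.shape:
            n *= extent
        return n

t = TensorDescriptor(data_ptr=0x1000, shape=[2, 3], dtype="float32")
print(t.num_elements())  # 6
```

Because every framework that adopts the standard agrees on this metadata layout, a tensor produced by one framework can be consumed by another with zero copies.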
>> >
>> > The main reasons to keep these dependencies are:
>> > - Each dependency has a user and developer community of its own that is
>> > larger than the TVM community, or a different license (MIT in HalideIR).
>> > - These dependencies are stable and update at roughly a monthly rate.
>> >
>> > While it is possible to fork the code into the TVM repo, given that the
>> > current TVM repo is self-contained and community development is
>> > standalone, we feel there is enough justification to treat these as
>> > 3rdparty dependencies.
>> >
>> >
>> > === Required Resources ===
>> >
>> > ==== Mailing List: ====
>> > The usual mailing lists are expected to be set up when entering
>> incubation:
>> >
>> > * private@tvm.apache.org
>> > * dev@tvm.apache.org, subscribe GitHub issues.
>> > * discuss-archive@tvm.apache.org, archive the discuss content of the
>> > discourse user forum
>> >
>> >
>> > Currently, we use issues only for development and encourage the
>> > community to use the discuss forum when possible. As a result, the
>> > current GitHub issues serve a similar purpose to dev@, so we propose to
>> > subscribe GitHub issues to dev@ after incubation.
>> >
>> > The community currently uses https://discuss.tvm.ai/ for general
>> > technical and support discussions. The forum is maintained by the PMC.
>> > We propose to continue using the forum and to archive its posts to an
>> > Apache mailing list. We already have a mechanism to do so (see
>> > https://groups.google.com/forum/#!forum/tvm-discuss-archive).
>> >
>> >
>> >
>> > ==== Git Repositories: ====
>> >
>> > Upon entering incubation, we plan to transfer the existing repo from
>> > https://github.com/dmlc/tvm to https://github.com/apache/incubator-tvm.
>> >
>> >
>> >
>> >
>> > ==== Issue Tracking: ====
>> >
>> > TVM currently uses GitHub to track issues. We would like to continue to
>> do
>> > so
>> > while we discuss migration possibilities with the ASF Infra team.
>> >
>> > ==== URL: ====
>> >
>> > Current project website: https://tvm.ai/. As we proceed, the website
>> > will migrate to https://tvm.incubator.apache.org and, hopefully,
>> > https://tvm.apache.org.
>> >
>> > === Initial Committers and PMCs ===
>> >
>> > The project has already followed the Apache way of development (in
>> > terms of meritocracy, community, and archiving of public discussion). We
>> > plan to transition the current PMC members to PPMC members, and
>> > committers to Apache committers. There are also ongoing votes and
>> > discussions in the current TVM PMC private mailing list about new
>> > committers/PMC members (we also invited our tentative mentors as
>> > observers to the list). We plan to migrate the discussions to private@
>> > after the proposal has been accepted and to bring in the new
>> > committers/PPMC members according to the standard Apache community
>> > procedure.
>> >
>> >
>> > Initial PPMCs
>> > - Tianqi Chen tqchen@apache.org
>> > - Ziheng Jiang ziheng@apache.org
>> > - Yizhi Liu liuyizhi@apache.org
>> > - Thierry Moreau moreau@cs.washington.edu
>> > - Haichen Shen shenhaichen@gmail.com
>> > - Lianmin Zheng lianminzheng@gmail.com
>> > - Markus Weimer weimer@apache.org

>> > - Sebastian Schelter
>> > - Byung-Gon Chun
>> >
>> > Initial Committers (Including PPMCs)
>> > - Aditya Atluri Aditya.Atluri@amd.com AMD
>> > - Tianqi Chen tqchen@apache.org University of Washington
>> > - Yuwei Hu huyuwei1995@gmail.com Cornell
>> > - Nick Hynes nhynes@berkeley.edu UC Berkeley
>> > - Ziheng Jiang ziheng@apache.org University of Washington
>> > - Yizhi Liu liuyizhi@apache.org AWS
>> > - Thierry Moreau moreau@cs.washington.edu University of Washington
>> > - Siva srk.it38@gmail.com Huawei
>> > - Haichen Shen shenhaichen@gmail.com AWS
>> > - Masahiro Masuda masahi129@gmail.com Ziosoft
>> > - Zhixun Tan phisiart@gmail.com Google
>> > - Leyuan Wang laurawly@gmail.com AWS
>> > - Eddie Yan eqy@cs.washington.edu University of Washington
>> > - Lianmin Zheng lianminzheng@gmail.com Shanghai Jiao Tong University
>> >
>> >
>> > === Sponsors: ===
>> >
>> > ==== Champion: ====
>> > * Markus Weimer, Microsoft
>> >
>> > ==== Mentors: ====
>> > * Sebastian Schelter, New York University
>> > * Byung-Gon Chun, Seoul National University
>> >
>> > ==== Sponsoring Entity ====
>> > We are requesting the Incubator to sponsor this project.
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
>> > For additional commands, e-mail: general-help@incubator.apache.org
>> >
>> >
>>








Re: [Proposal] Apache TVM

Posted by Timothy Chen <tn...@apache.org>.
Very excited to see this proposed as well.

I’d also like to volunteer mentoring if the community is open too.

Tim

On Fri, Feb 15, 2019 at 10:48 Henry Saputra <he...@gmail.com> wrote:

> HI Markus,
>
> I have been using TVM as part of ML platform work as consumer of the
> project, this is great news!
>
> Would love to come in and help as a Mentor of this project if it is Ok with
> the community.
>
>
> Thanks,
>
> - Henry
>
> On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer <we...@apache.org> wrote:
>
> > Hi,
> >
> > we'd like to start the discussion of accepting TVM into the incubator.
> > Please see the proposal below. I'd like to highlight a few things for
> > our discussion:
> >
> > (1) The project already follows many Apache ways like meritocracy,
> > open development and such.
> >
> > (2) The project recognizes an in-between state of "reviewer" that it
> > nominates people for between contributor and committer status. We'd
> > like to learn if and how to maintain that in the future.
> >
> > (3) The project contains hardware as a software artifact. We are not
> > aware of another ASF project like that and wonder if and how it
> > affects its acceptance into the incubator.
> >
> > Thanks!
> >
> > Markus
> >
> > === Proposal ===
> >
> > We propose to incubate the TVM project the Apache Software Foundation.
> TVM
> > is a
> > full stack open deep learning compiler stack for CPUs, GPUs, and
> > specialized
> > accelerators. It aims to close the gap between the productivity-focused
> > deep
> > learning frameworks, and the performance- or efficiency-oriented hardware
> > backends.
> >
> > === Background ===
> >
> > There is an increasing need to bring machine learning to a wide diversity
> > of
> > hardware devices. Current frameworks rely on vendor-specific operator
> > libraries
> > and optimize for a narrow range of server-class GPUs. Deploying workloads
> > to new
> > platforms -- such as mobile phones, embedded devices, and accelerators
> > (e.g.,
> > FPGAs, ASICs) -- requires significant manual effort. TVM is an end to end
> > deep
> > learning a compiler that exposes graph-level and operator-level
> > optimizations to
> > provide performance portability to deep learning workloads across diverse
> > hardware back-ends. TVM solves optimization challenges specific to deep
> > learning, such as high-level operator fusion, mapping to arbitrary
> hardware
> > primitives, and memory latency hiding. It also automates optimization of
> > low-level programs to hardware characteristics by employing a novel,
> > learning-based cost modeling method for rapid exploration of program
> > optimizations.
> >
> > Moreover, there is increasing interest in designing specialized hardware
> > which
> > accelerates machine learning. Towards this goal, TVM introduces VTA, an
> > open
> > source deep learning accelerator as part of its stack. The open source
> VTA
> > driver and hardware design is a crucial step toward building software
> > support
> > for future ASICs. The TVM-VTA flow acts as a is the great frontier for
> > researchers and practitioners to explore specialized hardware designs.
> >
> >
> > === Rationale ===
> >
> > Deep learning compilation will be the next frontier of machine learning
> > systems.
> > TVM is already one of the leading open source projects pursuing this
> > direction.
> >
> > Specifically, TVM provides infrastructure to use machine learning to
> > automatically optimize deployment of deep learning programs on diverse
> > hardware
> > backends.
> >
> >
> > === VTA: Open Source Hardware Design ===
> >
> > TVM also contains open source hardware as part of its stack. The VTA
> > hardware
> > design is a fully open sourced deep learning accelerator that allows us
> to
> > experiment with compiler, driver, runtime, and execute the code on FPGA.
> > VTA
> > provides a path to target future ASICs, and build software-driven
> > solutions to
> > co-design future deep learning accelerators.
> >
> > Having an open source hardware design in an ASF project is rare and
> perhaps
> > unprecedented. We put some of our rationale on why it is necessary for
> the
> > community.
> >
> > Deep learning specialized ASICs are going to be at the center of the AI
> > revolution. However, given its early shape, there is no open standard, or
> > even
> > any available information hardware interface that allows an open source
> > software
> > to target to. VTA provides such open source hardware abstraction layer
> and
> > allows us to build in abstractions that can be effectively used to target
> > other
> > deep learning accelerators.
> >
> > Moreover, there is an increasing need for co-designing future of machine
> > learning systems with the hardware abstraction. Having a co-designed open
> > source
> > hardware stack along with the software creates a path for this route. In
> > short,
> > we need open-source hardware to build the best open source software.
> >
> > Finally, we can still view VTA design as “software”, as its source code
> is
> > written in source description language and can generate “binary” which
> can
> > run
> > on FPGA and possibly simulators.
> >
> >
> > === Current Status ===
> >
> > TVM is open sourced under the Apache License for one and half years. See
> > the
> > current project website (https://tvm.ai/), Github
> > (https://github.com/dmlc/tvm/), as well as TVM Conference
> > (https://sampl.cs.washington.edu/tvmconf/#about-tvmconf)
> >
> > TVM has already been used in production, some highlights are AWS
> (Sagemaker
> > Neo), Huawei (AI Chip compilation) and Facebook (mobile optimization). We
> > anticipate the list of adopters to grow over the next few years.
> >
> > === Meritocracy ===
> >
> > The TVM stack began as a research project of the SAMPL group at Paul G.
> > Allen
> > School of Computer Science & Engineering, University of Washington. The
> > project
> > is now driven by an open source community involving multiple industry and
> > academic institutions. The project is currently governed by the Apache
> Way
> > (https://docs.tvm.ai/contribute/community.html). The project now has 14
> > committers and 6 PMCs, and the list is actively growing. The PMCs uses a
> > google
> > group mail-list to vote in new committers/PMCs, which will be moved to
> > private@
> > after incubation.
> >
> > The community highly values open collaboration among contributors from
> > different
> > backgrounds.The current committers come from UW, Berkeley, Cornell, SJTU,
> > AMD,
> > AWS, Huawei, Google, Facebook, Ziosoft.
> >
> >
> > === Community ===
> >
> > The project currently has 173 contributors. As per the Apache way, all
> the
> > discussions are conducted in publicly archivable places.
> >
> > - Github issues are used to track development activities and RFC.
> > - The roadmap is public and encourages participation from everyone in the
> > community.
> > - Discussion forums for general discussions. https://discuss.tvm.ai
> > - The content of the discourse forum can be considered as a public
> archive
> > as it is searchable with all the content
> > - We also created a mail-list archive of the forum, which we will forward
> > to
> > an Apache mail-list after incubation
> > https://groups.google.com/forum/#!forum/tvm-discuss-archive
> >
> > - See https://tvm.ai/community
> > - See https://github.com/dmlc/tvm/releases for past releases.
> >
> > Currently, Github issue serves as dev@ channel. Notably, major features
> > always
> > start from RFCs discussions to encourage broad participation in the
> > community.
> >
> > The community recognizes potential committers early by bringing
> > contributors as
> > code reviewers and encourages them to participate in code reviews. Code
> > reviews
> > and high-quality code are fundamental to the long-term success of the
> > project.
> > The reviewer mechanism in the community serves a way to highlight this
> > aspect as
> > well as helping the community find good candidates to promote to
> > committers.
> >
> >
> >
> > ==== Development and Decision Process ====
> >
> > See
> >
> https://docs.tvm.ai/contribute/community.html#general-development-process
> > for the current development guideline. The key points are: Open public
> > roadmap
> > during development, which turns into release notes Major features start
> > with an
> > RFC, everything happens in public Encourage public discussion via
> > archivable
> > channels Strive to reach a consensus on technical decisions through
> > discussion
> > Moderation from committers and encourage everyone’s participation
> >
> > Example Roadmap: https://github.com/dmlc/tvm/issues/1170
> > The idea is to keep an active list of roadmaps that can be turned
> directly
> > into a release note. Public roadmap helps to encourage general
> > participation
> > from all contributors.
> >
> > Example 1:
> > Recently a major proposal in the community is to bring in a new
> > high-level IR, RFC thread: https://github.com/dmlc/tvm/issues/1673 The
> > pull
> > request: https://github.com/dmlc/tvm/pull/1672 Everyone who participated
> > in the
> > RFC is invited to review the code as well - Follow up features are
> > proposed as
> > follow up RFCs.
> >
> > Example 2: Community guideline improvements
> > RFC thread: https://github.com/dmlc/tvm/issues/2017
> > Slack channel setup as per community suggestion, but still encourage the
> > community to only use it for quick communication and use publicly
> archived
> > channels for development: https://github.com/dmlc/tvm/issues/2174
> >
> > Example 3: Python3 timeline proposal
> > RFC thread: https://github.com/dmlc/tvm/issues/1602
> > Finished with the decision to respect backward compatibility and keep
> > python2
> > support.
> >
> > See
> >
> >
> https://github.com/dmlc/tvm/issues?utf8=%E2%9C%93&q=label%3A%22status%3A+RFC%22+
> > for a full list of RFCs.
> >
> >
> > === Alignment ===
> >
> > TVM is useful for building deep learning deployment solutions. It is
> > perhaps
> > also the first Apache incubator proposal that includes both open source
> > software
> > and hardware system design.
> >
> > It has the potential to benefit existing related ML projects such as
> MXNet,
> > Singa, SystemML, and Mahout by providing powerful low-level primitives
> for
> > matrix operations.
> >
> >
> > === Known Risks ===
> >
> > ==== Orphaned products ====
> >
> > The project has a diverse contributor base. As an example, the current
> > committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Google,
> > Facebook,
> > Ziosoft, Huawei. We are actively growing this list. Given that the
> project
> > has
> > already been used in production, there is a minimum risk of the project
> > being
> > abandoned.
> >
> > ==== Inexperience with Open Source ====
> >
> > The TVM community has extensive experience in open source. Three of
> > current five
> > PMCs are already PPMCs of existing Apache projects. Over the course of
> > development, the community already has a good way bringing RFCs,
> > discussions and
> > most importantly, welcoming new contributors in the Apache way.
> >
> > ==== Homogenous Developers ====
> >
> > The project has a diverse contributor base. As an example, the current
> > committers comes from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei,
> > Google,
> > Facebook, Ziosoft. The community actively seeks to collaborative broadly.
> > The
> > PMCs followed a principle to *only* nominate committers outside their own
> > organizations.
> >
> >
> > === Reliance on Salaried Developers ===
> >
> > Most of the current committers are volunteers.
> >
> > === Relationships with Other Apache Products ===
> >
> > TVM can serve as a fundamental compiler stack for deep learning and
> machine
> > learning in general. We expect it can benefit projects like MXNet, Spark,
> > Flink,
> > Mahout, and SystemML.
> >
> > === Documentation ===
> >
> > See https://tvm.ai/
> >
> > === Initial Source ===
> >
> > https://github.com/dmlc/tvm
> >
> > We plan to move our repository to
> https://github.com/apache/incubator-tvm
> >
> >
> > === Source and Intellectual Property Submission Plan ===
> >
> > TVM source code is available under Apache V2 license. We will work with
> the
> > committers to get ICLAs signed.
> >
> > === External Dependencies ===
> >
> > We put all the source level dependencies under
> > https://github.com/dmlc/tvm/tree/master/3rdparty
> >
> > - dmlc-core (Apache2): https://github.com/dmlc/dmlc-core
> > - dlpack (Apache2): https://github.com/dmlc/dlpack
> > - HalideIR (MIT): https://github.com/dmlc/HalideIR
> > - range(Unlicense): https://github.com/agauniyal/rang
> > - Compiler-RT (BSD)
> > - LLVM
> >
> > All of the current he dependencies are stable, which means that the
> > current TVM
> > repo is standalone and main development activities only happen at the TVM
> > repo.
> > The dependencies are periodically updated in the rate about once a month
> > when
> > necessary. For source level dependencies, we will always point to a
> stable
> > release version for software release in the future.
> >
> >
> > === External Dependencies on DMLC projects ===
> >
> > There are three dependencies to dmlc projects in the 3rdparty. The
> current
> > proposal is to keep the current dependencies in the 3rdparty. We
> elaborate
> > on
> > the background of these dependencies below:
> >
> > - dmlc-core: is a minimum module for logging and memory serialization. It
> > is
> > currently used by projects including ApacheMXNet, TVM, and XGBoost. The
> > project is relatively stable, with around one change a week(most recent
> > changes comes from XGBoost project). TVM’s dependency on dmlc-core is
> > minimum
> > and only uses its feature for logging.
> > - dlpack: is a minimum consensus standard for in-memory Tensor format. It
> > is
> > currently used by PyTorch, ApacheMXNet, Chainer, and a few other
> projects.
> > - HalideIR: is a minimum IR data structure that is isolated from a fork
> of
> > Halide project. We keep the license to be MIT to respect the original
> > license
> > and its origin. A common consensus in the TVM project is that we keep the
> > old
> > derived code in HalideIR (which are stable), and all new developments
> > happen
> > in the TVM repo.
> >
> > The main reason to propose keep these dependencies are:
> > - Each of the dependencies has the user and developer community of its
> own
> > which is larger than the TVM community or different license options(MIT
> in
> > HalideIR)
> > - These dependencies are stable and update at a monthly rate.
> >
> > While it is possible to fork the code in the tvm repo, given that the
> > current
> > tvm repo is self-contained, and community development is stand-alone, we
> > feel
> > that there are have enough justifications to treat these as 3rdparty
> > dependencies.
> >
> >
> > === Required Resources ===
> >
> > ==== Mailing List: ====
> > The usual mailing lists are expected to be set up when entering
> incubation:
> >
> > * private@tvm.apache.org
> > * dev@tvm.apache.org , subscribe github issues.
> > * discuss-archive@tvm.apache.org, Archive the discuss content of the
> > discourse user forum
> >
> >
> > Currently, we only use issues for developments and encourage community to
> > use
> > discuss forums when possible. As a result, the current github issues
> serves
> > similar purposes as dev@, so we propose to subscribe github issues to
> dev@
> > after
> > incubation.
> >
> > The current community use https://discuss.tvm.ai/ for general technical
> > and
> > support discussions. The community forum is maintained by PMCs. We
> propose
> > to
> > continue to use the forum and archive the posts to an Apache mail-list.
> We
> > already have the mechanism to do so (see
> > https://groups.google.com/forum/#!forum/tvm-discuss-archive)
> >
> >
> >
> > ==== Git Repositories: ====
> >
> > Upon entering incubation, we plan to transfer the existing repo from
> > https://github.com/dmlc/tvm to https://github.com/apache/incubator-tvm.
> >
> >
> >
> >
> > ==== Issue Tracking: ====
> >
> > TVM currently uses GitHub to track issues. We would like to continue to
> do
> > so
> > while we discuss migration possibilities with the ASF Infra team.
> >
> > ==== URL: ====
> >
> > Current project website: https://tvm.ai/, as we proceed website will
> > migrate to
> > https://tvm.incubator.apache.org and hopefully https://tvm.apache.org
> >
> > === Initial Committers and PMCs ===
> >
> > As the project has already followed the Apache way of development(in
> terms
> > of
> > meritocracy, community, and archive of public discussion). We plan to
> > transition
> > the current PMCs to PPMCs , and committers to apache committers. There
> are
> > also
> > ongoing votes and discussions in the current tvm PMC private mail-list
> > about new
> > committers/PMCs(we also invited our tentative mentors as observers to the
> > mail-list). We plan to migrate the discussions to private@ after the
> > proposal
> > has been accepted and bring in the new committers/PPMCs according to the
> > standard Apache community procedure.
> >
> >
> > Initial PPMCs
> > - Tianqi Chen tqchen@apache.org
> > - Ziheng Jiang ziheng@apache.org
> > - Yizhi Liu liuyizhi@apache.org
> > - Thierry Moreau moreau@cs.washington.edu
> > - Haichen Shen shenhaichen@gmail.com
> > - Lianmin Zheng lianminzheng@gmail.com
> > - Markus Weimer weimer@apache.org
> > - Sebastian Schelter
> > - Byung-Gon Chun
> >
> > Initial Committers (Including PPMCs)
> > - Aditya Atluri Aditya.Atluri@amd.com AMD
> > - Tianqi Chen tqchen@apache.org University of Washington
> > - Yuwei Hu huyuwei1995@gmail.com Cornell
> > - Nick Hynes nhynes@berkeley.edu UC Berkeley
> > - Ziheng Jiang ziheng@apache.org University of Washington
> > - Yizhi Liu liuyizhi@apache.org AWS
> > - Thierry Moreau moreau@cs.washington.edu University of Washington
> > - Siva srk.it38@gmail.com Huawei
> > - Haichen Shen shenhaichen@gmail.com AWS
> > - Masahiro Masuda masahi129@gmail.com Ziosoft
> > - Zhixun Tan phisiart@gmail.com Google
> > - Leyuan Wang laurawly@gmail.com AWS
> > - Eddie Yan eqy@cs.washington.edu University of Washington
> > - Lianmin Zheng lianminzheng@gmail.com Shanghai Jiao Tong University
> >
> >
> > === Sponsors: ===
> >
> > ==== Champion: ====
> > * Markus Weimer, Microsoft
> >
> > ==== Mentors: ====
> > * Sebastian Schelter, New York University
> > * Byung-Gon Chun, Seoul National University
> >
> > ==== Sponsoring Entity ====
> > We are requesting the Incubator to sponsor this project.
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> > For additional commands, e-mail: general-help@incubator.apache.org
> >
> >
>

Re: [Proposal] Apache TVM

Posted by Henry Saputra <he...@gmail.com>.
Hi Markus,

I have been using TVM as part of my ML platform work, as a consumer of the
project. This is great news!

I would love to come in and help as a mentor of this project if it is OK with
the community.


Thanks,

- Henry

On Fri, Feb 15, 2019 at 10:42 AM Markus Weimer <we...@apache.org> wrote:

> Hi,
>
> we'd like to start the discussion of accepting TVM into the incubator.
> Please see the proposal below. I'd like to highlight a few things for
> our discussion:
>
> (1) The project already follows many Apache ways like meritocracy,
> open development and such.
>
> (2) The project recognizes an in-between state of "reviewer" that it
> nominates people for between contributor and committer status. We'd
> like to learn if and how to maintain that in the future.
>
> (3) The project contains hardware as a software artifact. We are not
> aware of another ASF project like that and wonder if and how it
> affects its acceptance into the incubator.
>
> Thanks!
>
> Markus
>
> === Proposal ===
>
> We propose to incubate the TVM project into the Apache Software Foundation.
> TVM is an open, full-stack deep learning compiler for CPUs, GPUs, and
> specialized accelerators. It aims to close the gap between
> productivity-focused deep learning frameworks and performance- or
> efficiency-oriented hardware backends.
>
> === Background ===
>
> There is an increasing need to bring machine learning to a wide diversity
> of
> hardware devices. Current frameworks rely on vendor-specific operator
> libraries
> and optimize for a narrow range of server-class GPUs. Deploying workloads
> to new
> platforms -- such as mobile phones, embedded devices, and accelerators
> (e.g.,
> FPGAs, ASICs) -- requires significant manual effort. TVM is an end-to-end
> deep learning compiler that exposes graph-level and operator-level
> optimizations to
> provide performance portability to deep learning workloads across diverse
> hardware back-ends. TVM solves optimization challenges specific to deep
> learning, such as high-level operator fusion, mapping to arbitrary hardware
> primitives, and memory latency hiding. It also automates optimization of
> low-level programs to hardware characteristics by employing a novel,
> learning-based cost modeling method for rapid exploration of program
> optimizations.
>
> Moreover, there is increasing interest in designing specialized hardware
> which
> accelerates machine learning. Towards this goal, TVM introduces VTA, an
> open
> source deep learning accelerator as part of its stack. The open source VTA
> driver and hardware design is a crucial step toward building software
> support
> for future ASICs. The TVM-VTA flow serves as a frontier for researchers
> and practitioners to explore specialized hardware designs.
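To make the operator fusion mentioned above concrete, here is a minimal plain-Python sketch (illustrative only; this is not TVM's actual API, and the function names are invented for the example) of why fusing two elementwise operators avoids materializing an intermediate buffer:

```python
# Illustrative sketch of high-level operator fusion (not TVM code).

def relu(xs):
    # Elementwise max(x, 0): one full pass over the data.
    return [max(x, 0.0) for x in xs]

def scale(xs, s):
    # Elementwise multiply: another full pass over the data.
    return [x * s for x in xs]

def unfused(xs, s):
    tmp = relu(xs)        # intermediate buffer is materialized here
    return scale(tmp, s)  # second traversal of memory

def fused(xs, s):
    # A compiler performing operator fusion rewrites the graph so both
    # elementwise ops run in a single pass, with no intermediate buffer.
    return [max(x, 0.0) * s for x in xs]

print(fused([-1.0, 2.0], 3.0))  # [0.0, 6.0], same result as unfused()
```

On real hardware the fused version saves a round trip through memory for the intermediate result, which is the kind of optimization the proposal describes TVM applying automatically at the graph level.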
>
>
> === Rationale ===
>
> Deep learning compilation will be the next frontier of machine learning
> systems.
> TVM is already one of the leading open source projects pursuing this
> direction.
>
> Specifically, TVM provides infrastructure to use machine learning to
> automatically optimize deployment of deep learning programs on diverse
> hardware
> backends.
>
>
> === VTA: Open Source Hardware Design ===
>
> TVM also contains open source hardware as part of its stack. The VTA
> hardware design is a fully open sourced deep learning accelerator that
> allows us to experiment with the compiler, driver, and runtime, and to
> execute the code on an FPGA. VTA provides a path to target future ASICs,
> and to build software-driven solutions to co-design future deep learning
> accelerators.
>
> Having an open source hardware design in an ASF project is rare and perhaps
> unprecedented. We present below some of our rationale for why it is
> necessary for the community.
>
> Deep learning specialized ASICs are going to be at the center of the AI
> revolution. However, given this early stage, there is no open standard, or
> even any publicly available hardware interface, that open source software
> can target. VTA provides such an open source hardware abstraction layer and
> allows us to build in abstractions that can be effectively used to target
> other deep learning accelerators.
>
> Moreover, there is an increasing need for co-designing the future of machine
> learning systems with the hardware abstraction. Having a co-designed open
> source
> hardware stack along with the software creates a path for this route. In
> short,
> we need open-source hardware to build the best open source software.
>
> Finally, we can still view the VTA design as “software”, as its source code
> is written in a hardware description language and can generate a “binary”
> that can run on FPGAs and, possibly, simulators.
>
>
> === Current Status ===
>
> TVM has been open sourced under the Apache License for one and a half
> years. See the
> current project website (https://tvm.ai/), Github
> (https://github.com/dmlc/tvm/), as well as TVM Conference
> (https://sampl.cs.washington.edu/tvmconf/#about-tvmconf)
>
> TVM has already been used in production; some highlights are AWS (SageMaker
> Neo), Huawei (AI chip compilation), and Facebook (mobile optimization). We
> anticipate that the list of adopters will grow over the next few years.
>
> === Meritocracy ===
>
> The TVM stack began as a research project of the SAMPL group at Paul G.
> Allen
> School of Computer Science & Engineering, University of Washington. The
> project
> is now driven by an open source community involving multiple industry and
> academic institutions. The project is currently governed by the Apache Way
> (https://docs.tvm.ai/contribute/community.html). The project now has 14
> committers and 6 PMCs, and the list is actively growing. The PMC uses a
> Google Groups mailing list to vote in new committers/PMCs; this will be
> moved to private@ after incubation.
>
> The community highly values open collaboration among contributors from
> different backgrounds. The current committers come from UW, Berkeley,
> Cornell, SJTU, AMD, AWS, Huawei, Google, Facebook, and Ziosoft.
>
>
> === Community ===
>
> The project currently has 173 contributors. As per the Apache way, all the
> discussions are conducted in publicly archivable places.
>
> - GitHub issues are used to track development activities and RFCs.
> - The roadmap is public and encourages participation from everyone in the
> community.
> - A discussion forum for general discussions: https://discuss.tvm.ai
> - The content of the Discourse forum can be considered a public archive,
> as all of its content is searchable
> - We also created a mailing-list archive of the forum, which we will
> forward to an Apache mailing list after incubation:
> https://groups.google.com/forum/#!forum/tvm-discuss-archive
>
> - See https://tvm.ai/community
> - See https://github.com/dmlc/tvm/releases for past releases.
>
> Currently, GitHub issues serve as the dev@ channel. Notably, major features
> always start from RFC discussions to encourage broad participation in the
> community.
>
> The community recognizes potential committers early by bringing in
> contributors as code reviewers and encourages them to participate in code
> reviews. Code reviews and high-quality code are fundamental to the
> long-term success of the project. The reviewer mechanism serves as a way to
> highlight this aspect, as well as to help the community find good
> candidates to promote to committers.
>
>
>
> ==== Development and Decision Process ====
>
> See
> https://docs.tvm.ai/contribute/community.html#general-development-process
> for the current development guideline. The key points are:
> - An open public roadmap during development, which turns into release notes
> - Major features start with an RFC; everything happens in public
> - Public discussion is encouraged via archivable channels
> - Strive to reach consensus on technical decisions through discussion
> - Moderation from committers, while encouraging everyone’s participation
>
> Example Roadmap: https://github.com/dmlc/tvm/issues/1170
> The idea is to keep an active list of roadmaps that can be turned directly
> into a release note. A public roadmap helps to encourage general
> participation from all contributors.
>
> Example 1: New high-level IR
> A recent major proposal in the community is to bring in a new high-level
> IR. RFC thread: https://github.com/dmlc/tvm/issues/1673
> Pull request: https://github.com/dmlc/tvm/pull/1672
> Everyone who participated in the RFC is invited to review the code as well;
> follow-up features are proposed as follow-up RFCs.
>
> Example 2: Community guideline improvements
> RFC thread: https://github.com/dmlc/tvm/issues/2017
> A Slack channel was set up per community suggestion, but the community is
> still encouraged to use it only for quick communication and to use publicly
> archived channels for development: https://github.com/dmlc/tvm/issues/2174
>
> Example 3: Python3 timeline proposal
> RFC thread: https://github.com/dmlc/tvm/issues/1602
> Finished with the decision to respect backward compatibility and keep
> Python 2 support.
>
> See
>
> https://github.com/dmlc/tvm/issues?utf8=%E2%9C%93&q=label%3A%22status%3A+RFC%22+
> for a full list of RFCs.
>
>
> === Alignment ===
>
> TVM is useful for building deep learning deployment solutions. It is
> perhaps
> also the first Apache incubator proposal that includes both open source
> software
> and hardware system design.
>
> It has the potential to benefit existing related ML projects such as MXNet,
> Singa, SystemML, and Mahout by providing powerful low-level primitives for
> matrix operations.
>
>
> === Known Risks ===
>
> ==== Orphaned products ====
>
> The project has a diverse contributor base. As an example, the current
> committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Google,
> Facebook,
> Ziosoft, Huawei. We are actively growing this list. Given that the project
> has already been used in production, there is minimal risk of the project
> being abandoned.
>
> ==== Inexperience with Open Source ====
>
> The TVM community has extensive experience in open source. Three of the
> current five PMCs are already PPMC members of existing Apache projects.
> Over the course of development, the community has established a good
> practice of bringing in RFCs and discussions and, most importantly, of
> welcoming new contributors in the Apache way.
>
> ==== Homogenous Developers ====
>
> The project has a diverse contributor base. As an example, the current
> committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei,
> Google, Facebook, Ziosoft. The community actively seeks to collaborate
> broadly. The PMCs followed a principle to *only* nominate committers from
> outside their own organizations.
>
>
> === Reliance on Salaried Developers ===
>
> Most of the current committers are volunteers.
>
> === Relationships with Other Apache Products ===
>
> TVM can serve as a fundamental compiler stack for deep learning and machine
> learning in general. We expect it can benefit projects like MXNet, Spark,
> Flink,
> Mahout, and SystemML.
>
> === Documentation ===
>
> See https://tvm.ai/
>
> === Initial Source ===
>
> https://github.com/dmlc/tvm
>
> We plan to move our repository to https://github.com/apache/incubator-tvm
>
>
> === Source and Intellectual Property Submission Plan ===
>
> TVM source code is available under Apache V2 license. We will work with the
> committers to get ICLAs signed.
>
> === External Dependencies ===
>
> We put all the source level dependencies under
> https://github.com/dmlc/tvm/tree/master/3rdparty
>
> - dmlc-core (Apache2): https://github.com/dmlc/dmlc-core
> - dlpack (Apache2): https://github.com/dmlc/dlpack
> - HalideIR (MIT): https://github.com/dmlc/HalideIR
> - rang (Unlicense): https://github.com/agauniyal/rang
> - Compiler-RT (BSD)
> - LLVM
>
> All of the current dependencies are stable, which means that the current
> TVM repo is standalone and the main development activities happen only in
> the TVM repo. The dependencies are periodically updated, at a rate of about
> once a month, when necessary. For source-level dependencies, we will always
> point to a stable release version for software releases in the future.
>
>
> === External Dependencies on DMLC projects ===
>
> There are three dependencies on DMLC projects in 3rdparty. The current
> proposal is to keep these dependencies in 3rdparty. We elaborate on the
> background of these dependencies below:
>
> - dmlc-core: a minimal module for logging and memory serialization. It is
> currently used by projects including Apache MXNet, TVM, and XGBoost. The
> project is relatively stable, with around one change a week (most recent
> changes come from the XGBoost project). TVM’s dependency on dmlc-core is
> minimal and only uses its logging feature.
> - dlpack: a minimal consensus standard for an in-memory tensor format. It
> is currently used by PyTorch, Apache MXNet, Chainer, and a few other
> projects.
> - HalideIR: a minimal IR data structure that is isolated from a fork of the
> Halide project. We keep the license as MIT to respect the original license
> and its origin. A common consensus in the TVM project is that we keep the
> old derived code in HalideIR (which is stable), and all new development
> happens in the TVM repo.
>
> The main reasons for proposing to keep these dependencies are:
> - Each of the dependencies has a user and developer community of its own
> that is larger than the TVM community, or different license options (MIT in
> HalideIR)
> - These dependencies are stable and update at a monthly rate.
>
> While it is possible to fork the code into the TVM repo, given that the
> current TVM repo is self-contained and community development is
> stand-alone, we feel that there is enough justification to treat these as
> 3rdparty dependencies.
>
>
> === Required Resources ===
>
> ==== Mailing List: ====
> The usual mailing lists are expected to be set up when entering incubation:
>
> * private@tvm.apache.org
> * dev@tvm.apache.org, subscribed to GitHub issues
> * discuss-archive@tvm.apache.org, archiving the content of the Discourse
> user forum
>
>
> Currently, we only use issues for development and encourage the community
> to use the discussion forum when possible. As a result, the current GitHub
> issues serve a similar purpose to dev@, so we propose subscribing GitHub
> issues to dev@ after incubation.
>
> The community currently uses https://discuss.tvm.ai/ for general technical
> and support discussions. The community forum is maintained by the PMC. We
> propose to continue to use the forum and archive the posts to an Apache
> mailing list. We already have the mechanism to do so (see
> https://groups.google.com/forum/#!forum/tvm-discuss-archive).
>
>
>
> ==== Git Repositories: ====
>
> Upon entering incubation, we plan to transfer the existing repo from
> https://github.com/dmlc/tvm to https://github.com/apache/incubator-tvm.
>
>
>
>
> ==== Issue Tracking: ====
>
> TVM currently uses GitHub to track issues. We would like to continue to do
> so
> while we discuss migration possibilities with the ASF Infra team.
>
> ==== URL: ====
>
> Current project website: https://tvm.ai/. As we proceed, the website will
> migrate to https://tvm.incubator.apache.org and, hopefully,
> https://tvm.apache.org.
>
> === Initial Committers and PMCs ===
>
> As the project has already followed the Apache way of development (in terms
> of meritocracy, community, and public archival of discussion), we plan to
> transition the current PMCs to PPMCs, and committers to Apache committers.
> There are also ongoing votes and discussions in the current TVM PMC private
> mailing list about new committers/PMCs (we also invited our tentative
> mentors as observers to the mailing list). We plan to migrate the
> discussions to private@ after the proposal has been accepted and bring in
> the new committers/PPMCs according to the standard Apache community
> procedure.
>
>
> Initial PPMCs
> - Tianqi Chen tqchen@apache.org
> - Ziheng Jiang ziheng@apache.org
> - Yizhi Liu liuyizhi@apache.org
> - Thierry Moreau moreau@cs.washington.edu
> - Haichen Shen shenhaichen@gmail.com
> - Lianmin Zheng lianminzheng@gmail.com
> - Markus Weimer weimer@apache.org
> - Sebastian Schelter
> - Byung-Gon Chun
>
> Initial Committers (Including PPMCs)
> - Aditya Atluri Aditya.Atluri@amd.com AMD
> - Tianqi Chen tqchen@apache.org University of Washington
> - Yuwei Hu huyuwei1995@gmail.com Cornell
> - Nick Hynes nhynes@berkeley.edu UC Berkeley
> - Ziheng Jiang ziheng@apache.org University of Washington
> - Yizhi Liu liuyizhi@apache.org AWS
> - Thierry Moreau moreau@cs.washington.edu University of Washington
> - Siva srk.it38@gmail.com Huawei
> - Haichen Shen shenhaichen@gmail.com AWS
> - Masahiro Masuda masahi129@gmail.com Ziosoft
> - Zhixun Tan phisiart@gmail.com Google
> - Leyuan Wang laurawly@gmail.com AWS
> - Eddie Yan eqy@cs.washington.edu University of Washington
> - Lianmin Zheng lianminzheng@gmail.com Shanghai Jiao Tong University
>
>
> === Sponsors: ===
>
> ==== Champion: ====
> * Markus Weimer, Microsoft
>
> ==== Mentors: ====
> * Sebastian Schelter, New York University
> * Byung-Gon Chun, Seoul National University
>
> ==== Sponsoring Entity ====
> We are requesting the Incubator to sponsor this project.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> For additional commands, e-mail: general-help@incubator.apache.org
>
>

Re: [Proposal] Apache TVM

Posted by Greg Stein <gs...@gmail.com>.
On Fri, Feb 15, 2019 at 12:42 PM Markus Weimer <we...@apache.org> wrote:
>...

> === Meritocracy ===
>
> The TVM stack began as a research project of the SAMPL group at Paul G.
> Allen
> School of Computer Science & Engineering, University of Washington. The
> project
> is now driven by an open source community involving multiple industry and
> academic institutions. The project is currently governed by the Apache Way
> (https://docs.tvm.ai/contribute/community.html). The project now has 14
> committers and 6 PMCs, and the list is actively growing. The PMCs uses a
> google
> group mail-list to vote in new committers/PMCs, which will be moved to
> private@
> after incubation.
>

I've seen people misuse the "PMC" acronym elsewhere, and I'd hope to nip
this in the bud, right now.

"PMC" stands for "Project Management Committee".

Not a person. There are PMC Members. Those persons are not "PMCs". The
Foundation has nearly 200 PMCs, comprised of many hundreds of PMC Members.
And by extension PPMC Members, not "PPMCs".

Regards,
-g

Re: [Proposal] Apache TVM

Posted by Byung-Gon Chun <bg...@gmail.com>.
I'm very excited to see this proposal!

-Gon

On Mon, Feb 18, 2019 at 9:06 AM Sebastian <ss...@apache.org> wrote:

> I have also volunteered as a potential mentor for the TVM project and I
> am very excited about it :)
>
> Best,
> Sebastian
>
> On 17.02.19 09:02, Kevin A. McGrail wrote:
> > +1 binding with a caveat:
> >
> > You need mentors and champions from Apache who are available and ideally
> > active in the incubator.  Markus had to step down from Hivemall last
> > year.  Has his situation changed?
> >
> > Some comments:
> > The hardware artifacts being donated are interesting and something I
> > would support helping with.  We might want to loop in the Secretary and
> > Legal VPs to discuss.
> >
> > The reviewer status is something the PMC can elect to do.  They might
> > end up with the same karma as committers on any repos if they need it;
> > that karma is the only hurdle I can think of.  But we like a model of
> > trust for people, so it should be a good thing.
> >
> > But otherwise looks like a great start!
> > KAM
> > On Fri, Feb 15, 2019, 13:42 Markus Weimer <weimer@apache.org
> > <ma...@apache.org> wrote:
> >
> >     Hi,
> >
> >     we'd like to start the discussion of accepting TVM into the
> incubator.
> >     Please see the proposal below. I'd like to highlight a few things for
> >     our discussion:
> >
> >     (1) The project already follows many Apache ways like meritocracy,
> >     open development and such.
> >
> >     (2) The project recognizes an in-between state of "reviewer" that it
> >     nominates people for between contributor and committer status. We'd
> >     like to learn if and how to maintain that in the future.
> >
> >     (3) The project contains hardware as a software artifact. We are not
> >     aware of another ASF project like that and wonder if and how it
> >     affects its acceptance into the incubator.
> >
> >     Thanks!
> >
> >     Markus
> >
> >     === Proposal ===
> >
> >     We propose to incubate the TVM project into the Apache Software
> >     Foundation. TVM is an open, full-stack deep learning compiler for
> >     CPUs, GPUs, and specialized accelerators. It aims to close the gap
> >     between productivity-focused deep learning frameworks and
> >     performance- or efficiency-oriented hardware backends.
> >
> >     === Background ===
> >
> >     There is an increasing need to bring machine learning to a wide
> >     diversity of
> >     hardware devices. Current frameworks rely on vendor-specific
> >     operator libraries
> >     and optimize for a narrow range of server-class GPUs. Deploying
> >     workloads to new
> >     platforms -- such as mobile phones, embedded devices, and
> >     accelerators (e.g.,
> >     FPGAs, ASICs) -- requires significant manual effort. TVM is an
> >     end-to-end deep learning compiler that exposes graph-level and
> >     operator-level optimizations to provide performance portability to
> >     deep learning workloads across diverse
> >     hardware back-ends. TVM solves optimization challenges specific to
> deep
> >     learning, such as high-level operator fusion, mapping to arbitrary
> >     hardware
> >     primitives, and memory latency hiding. It also automates
> optimization of
> >     low-level programs to hardware characteristics by employing a novel,
> >     learning-based cost modeling method for rapid exploration of program
> >     optimizations.
> >
> >     Moreover, there is increasing interest in designing specialized
> >     hardware which
> >     accelerates machine learning. Towards this goal, TVM introduces VTA,
> >     an open
> >     source deep learning accelerator as part of its stack. The open
> >     source VTA
> >     driver and hardware design is a crucial step toward building
> >     software support
> >     for future ASICs. The TVM-VTA flow serves as a frontier for
> >     researchers and practitioners to explore specialized hardware designs.
> >
> >
> >     === Rationale ===
> >
> >     Deep learning compilation will be the next frontier of machine
> >     learning systems.
> >     TVM is already one of the leading open source projects pursuing this
> >     direction.
> >
> >     Specifically, TVM provides infrastructure to use machine learning to
> >     automatically optimize deployment of deep learning programs on
> >     diverse hardware
> >     backends.
> >
> >
> >     === VTA: Open Source Hardware Design ===
> >
> >     TVM also contains open source hardware as part of its stack. The VTA
> >     hardware design is a fully open sourced deep learning accelerator
> >     that allows us to experiment with the compiler, driver, and runtime,
> >     and to execute the code on an FPGA. VTA provides a path to target
> >     future ASICs, and to build software-driven solutions to co-design
> >     future deep learning accelerators.
> >
> >     Having an open source hardware design in an ASF project is rare and
> >     perhaps unprecedented. We present below some of our rationale for why
> >     it is necessary for the community.
> >
> >     Deep learning specialized ASICs are going to be at the center of the
> AI
> >     revolution. However, given this early stage, there is no open
> >     standard, or even any publicly available hardware interface, that
> >     open source software can target. VTA provides such an open source
> >     hardware abstraction layer and allows us to build in abstractions
> >     that can be effectively used to target other deep learning
> >     accelerators.
> >
> >     Moreover, there is an increasing need for co-designing the future of
> >     machine
> >     learning systems with the hardware abstraction. Having a co-designed
> >     open source
> >     hardware stack along with the software creates a path for this
> >     route. In short,
> >     we need open-source hardware to build the best open source software.
> >
> >     Finally, we can still view the VTA design as “software”, as its
> >     source code is written in a hardware description language and can
> >     generate a “binary” that can run on FPGAs and, possibly, simulators.
> >
> >
> >     === Current Status ===
> >
> >     TVM has been open sourced under the Apache License for one and a
> >     half years. See the
> >     current project website (https://tvm.ai/), Github
> >     (https://github.com/dmlc/tvm/), as well as TVM Conference
> >     (https://sampl.cs.washington.edu/tvmconf/#about-tvmconf)
> >
> >     TVM has already been used in production; some highlights are AWS
> >     (SageMaker Neo), Huawei (AI chip compilation), and Facebook (mobile
> >     optimization). We anticipate that the list of adopters will grow
> >     over the next few years.
> >
> >     === Meritocracy ===
> >
> >     The TVM stack began as a research project of the SAMPL group at Paul
> >     G. Allen
> >     School of Computer Science & Engineering, University of Washington.
> >     The project
> >     is now driven by an open source community involving multiple
> >     industry and
> >     academic institutions. The project is currently governed by the
> >     Apache Way
> >     (https://docs.tvm.ai/contribute/community.html). The project now
> has 14
> >     committers and 6 PMCs, and the list is actively growing. The PMC
> >     uses a Google Groups mailing list to vote in new committers/PMCs;
> >     this will be moved to private@ after incubation.
> >
> >     The community highly values open collaboration among contributors
> >     from different backgrounds. The current committers come from UW,
> >     Berkeley, Cornell, SJTU, AMD, AWS, Huawei, Google, Facebook, Ziosoft.
> >
> >
> >     === Community ===
> >
> >     The project currently has 173 contributors. As per the Apache way,
> >     all the
> >     discussions are conducted in publicly archivable places.
> >
> >     - GitHub issues are used to track development activities and RFCs.
> >     - The roadmap is public and encourages participation from everyone
> >     in the
> >     community.
> >     - A discussion forum for general discussions: https://discuss.tvm.ai
> >     - The content of the Discourse forum can be considered a public
> >     archive, as all of its content is searchable
> >     - We also created a mailing-list archive of the forum, which we will
> >     forward to an Apache mailing list after incubation:
> >     https://groups.google.com/forum/#!forum/tvm-discuss-archive
> >
> >     - See https://tvm.ai/community
> >     - See https://github.com/dmlc/tvm/releases for past releases.
> >
> >     Currently, GitHub issues serve as the dev@ channel. Notably, major
> >     features always start from RFC discussions to encourage broad
> >     participation in the community.
> >
> >     The community recognizes potential committers early by bringing in
> >     contributors as code reviewers and encourages them to participate in
> >     code reviews. Code reviews and high-quality code are fundamental to
> >     the long-term success of the project. The reviewer mechanism serves
> >     as a way to highlight this aspect, as well as to help the community
> >     find good candidates to promote to committers.
> >
> >
> >
> >     ==== Development and Decision Process ====
> >
> >     See
> >
> https://docs.tvm.ai/contribute/community.html#general-development-process
> >     for the current development guideline. The key points are:
> >     - An open public roadmap during development, which turns into
> >     release notes
> >     - Major features start with an RFC; everything happens in public
> >     - Public discussion is encouraged via archivable channels
> >     - Strive to reach consensus on technical decisions through
> >     discussion
> >     - Moderation from committers, while encouraging everyone’s
> >     participation
> >
> >     Example Roadmap: https://github.com/dmlc/tvm/issues/1170
> >     The idea is to keep an active list of roadmaps that can be turned
> >     directly
> >     into a release note. A public roadmap helps to encourage general
> >     participation from all contributors.
> >
> >     Example 1: New high-level IR
> >     A recent major proposal in the community is to bring in a new
> >     high-level IR. RFC thread: https://github.com/dmlc/tvm/issues/1673
> >     Pull request: https://github.com/dmlc/tvm/pull/1672
> >     Everyone who participated in the RFC is invited to review the code
> >     as well; follow-up features are proposed as follow-up RFCs.
> >
> >     Example 2: Community guideline improvements
> >     RFC thread: https://github.com/dmlc/tvm/issues/2017
> >     A Slack channel was set up per community suggestion, but the
> >     community is still encouraged to use it only for quick communication
> >     and to use publicly archived channels for development:
> >     https://github.com/dmlc/tvm/issues/2174
> >
> >     Example 3: Python3 timeline proposal
> >     RFC thread: https://github.com/dmlc/tvm/issues/1602
> >     Finished with the decision to respect backward compatibility and
> >     keep Python 2 support.
> >
> >     See
> >
> https://github.com/dmlc/tvm/issues?utf8=%E2%9C%93&q=label%3A%22status%3A+RFC%22+
> >     for a full list of RFCs.
> >
> >
> >     === Alignment ===
> >
> >     TVM is useful for building deep learning deployment solutions. It is
> >     perhaps
> >     also the first Apache incubator proposal that includes both open
> >     source software
> >     and hardware system design.
> >
> >     It has the potential to benefit existing related ML projects such as
> >     MXNet,
> >     Singa, SystemML, and Mahout by providing powerful low-level
> >     primitives for
> >     matrix operations.
> >
> >
> >     === Known Risks ===
> >
> >     ==== Orphaned products ====
> >
> >     The project has a diverse contributor base. As an example, the
> current
> >     committers come from: UW, Berkeley, Cornell, SJTU, AMD, AWS, Google,
> >     Facebook,
> >     Ziosoft, Huawei. We are actively growing this list. Given that the
> >     project has already been used in production, there is minimal risk
> >     of the project being abandoned.
> >
> >     ==== Inexperience with Open Source ====
> >
> >     The TVM community has extensive experience in open source. Three of
> >     the current five PMC members already serve on the (P)PMCs of existing
> >     Apache projects. Over the course of development, the community has
> >     established good practices for bringing in RFCs and discussions and,
> >     most importantly, for welcoming new contributors in the Apache way.
> >
> >     ==== Homogenous Developers ====
> >
> >     The project has a diverse contributor base. As an example, the
> >     current committers come from UW, Berkeley, Cornell, SJTU, AMD, AWS,
> >     Huawei, Google, Facebook, and Ziosoft. The community actively seeks
> >     to collaborate broadly. The PMC has followed a principle of *only*
> >     nominating committers from outside their own organizations.
> >
> >
> >     ==== Reliance on Salaried Developers ====
> >
> >     Most of the current committers are volunteers.
> >
> >     ==== Relationships with Other Apache Products ====
> >
> >     TVM can serve as a fundamental compiler stack for deep learning and
> >     machine
> >     learning in general. We expect it can benefit projects like MXNet,
> >     Spark, Flink,
> >     Mahout, and SystemML.
> >
> >     === Documentation ===
> >
> >     See https://tvm.ai/
> >
> >     === Initial Source ===
> >
> >     https://github.com/dmlc/tvm
> >
> >     We plan to move our repository to
> >     https://github.com/apache/incubator-tvm
> >
> >
> >     === Source and Intellectual Property Submission Plan ===
> >
> >     TVM's source code is available under the Apache License, Version 2.0.
> >     We will work with the committers to get ICLAs signed.
> >
> >     === External Dependencies ===
> >
> >     We put all the source level dependencies under
> >     https://github.com/dmlc/tvm/tree/master/3rdparty
> >
> >     - dmlc-core (Apache2): https://github.com/dmlc/dmlc-core
> >     - dlpack (Apache2): https://github.com/dmlc/dlpack
> >     - HalideIR (MIT): https://github.com/dmlc/HalideIR
> >     - rang (Unlicense): https://github.com/agauniyal/rang
> >     - Compiler-RT (BSD)
> >     - LLVM
> >
> >     All of the current dependencies are stable, which means that the
> >     current TVM repo is standalone and the main development activities
> >     happen only in the TVM repo. The dependencies are updated
> >     periodically, at a rate of about once a month, when necessary. For
> >     source-level dependencies, we will always point to a stable release
> >     version for future software releases.
> >
> >
> >     === External Dependencies on DMLC projects ===
> >
> >     There are three dependencies on dmlc projects in 3rdparty. The
> >     current proposal is to keep these dependencies in 3rdparty. We
> >     elaborate on the background of each dependency below:
> >
> >     - dmlc-core: a minimal module for logging and memory serialization.
> >     It is currently used by projects including Apache MXNet, TVM, and
> >     XGBoost. The project is relatively stable, with around one change a
> >     week (most recent changes come from the XGBoost project). TVM's
> >     dependency on dmlc-core is minimal and uses only its logging feature.
> >     - dlpack: a minimal consensus standard for an in-memory tensor
> >     format. It is currently used by PyTorch, Apache MXNet, Chainer, and
> >     a few other projects.
> >     - HalideIR: a minimal IR data structure isolated from a fork of the
> >     Halide project. We keep the MIT license to respect the original
> >     license and its origin. A common consensus in the TVM project is
> >     that we keep the old derived code in HalideIR (which is stable), and
> >     all new development happens in the TVM repo.
> >
> >     The main reasons to propose keeping these dependencies are:
> >     - Each dependency has a user and developer community of its own that
> >     is larger than the TVM community, or a different license option (MIT
> >     in HalideIR).
> >     - These dependencies are stable and update at roughly a monthly rate.
> >
> >     While it is possible to fork the code into the tvm repo, given that
> >     the current tvm repo is self-contained and community development is
> >     stand-alone, we feel there is enough justification to treat these as
> >     3rdparty dependencies.
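To make the dlpack dependency concrete: the standard is essentially a small C header centered on a `DLTensor` struct that frameworks agree on for zero-copy tensor exchange. The following is a Python/ctypes sketch of that struct's layout; field names and ordering follow the dlpack header as of this writing, and the header itself remains the authoritative definition:

```python
import ctypes

class DLDataType(ctypes.Structure):
    # Element type: code is the kind (0 = int, 1 = uint, 2 = float),
    # bits is the width, lanes > 1 indicates a vectorized type.
    _fields_ = [("code", ctypes.c_uint8),
                ("bits", ctypes.c_uint8),
                ("lanes", ctypes.c_uint16)]

class DLContext(ctypes.Structure):
    # Which device (CPU, GPU, ...) and which device index holds the data.
    _fields_ = [("device_type", ctypes.c_int),
                ("device_id", ctypes.c_int)]

class DLTensor(ctypes.Structure):
    # The shared in-memory tensor descriptor: a raw data pointer plus
    # enough metadata (shape, strides, dtype, device) to interpret it.
    _fields_ = [("data", ctypes.c_void_p),
                ("ctx", DLContext),
                ("ndim", ctypes.c_int),
                ("dtype", DLDataType),
                ("shape", ctypes.POINTER(ctypes.c_int64)),
                ("strides", ctypes.POINTER(ctypes.c_int64)),
                ("byte_offset", ctypes.c_uint64)]

# Describe a 3x4 tensor (metadata only; no data buffer attached).
shape = (ctypes.c_int64 * 2)(3, 4)
t = DLTensor(ndim=2, shape=shape)
```

Because the struct is this small and framework-neutral, frameworks can hand tensors to each other by pointer without copying, which is exactly the consensus role dlpack plays for TVM.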
> >
> >
> >     === Required Resources ===
> >
> >     ==== Mailing List: ====
> >     The usual mailing lists are expected to be set up when entering
> >     incubation:
> >
> >     * private@tvm.apache.org
> >     * dev@tvm.apache.org, subscribed to GitHub issues
> >     * discuss-archive@tvm.apache.org, archiving the discussion content
> >     of the Discourse user forum
> >
> >
> >     Currently, we use GitHub issues only for development and encourage
> >     the community to use the discussion forum when possible. As a
> >     result, GitHub issues currently serve a similar purpose to dev@, so
> >     we propose to subscribe GitHub issues to dev@ after incubation.
> >
> >     The community currently uses https://discuss.tvm.ai/ for general
> >     technical and support discussions. The forum is maintained by the
> >     PMC. We propose to continue using the forum and to archive its posts
> >     to an Apache mailing list. We already have a mechanism to do so (see
> >     https://groups.google.com/forum/#!forum/tvm-discuss-archive).
> >
> >
> >
> >     ==== Git Repositories: ====
> >
> >     Upon entering incubation, we plan to transfer the existing repo from
> >     https://github.com/dmlc/tvm to
> https://github.com/apache/incubator-tvm.
> >
> >
> >
> >
> >     ==== Issue Tracking: ====
> >
> >     TVM currently uses GitHub to track issues. We would like to continue
> >     to do so
> >     while we discuss migration possibilities with the ASF Infra team.
> >
> >     ==== URL: ====
> >
> >     Current project website: https://tvm.ai/. As we proceed, the website
> >     will migrate to https://tvm.incubator.apache.org and, hopefully,
> >     https://tvm.apache.org.
> >
> >     === Initial Committers and PMCs ===
> >
> >     The project already follows the Apache way of development (in terms
> >     of meritocracy, community, and public archiving of discussion). We
> >     plan to transition the current PMC members to PPMC members, and the
> >     current committers to Apache committers. There are also ongoing
> >     votes and discussions on the current TVM PMC private mailing list
> >     about new committers/PMC members (we have also invited our tentative
> >     mentors to the mailing list as observers). We plan to migrate these
> >     discussions to private@ after the proposal is accepted and to bring
> >     in the new committers/PPMC members according to the standard Apache
> >     community procedure.
> >
> >
> >     Initial PPMC members
> >     - Tianqi Chen tqchen@apache.org
> >     - Ziheng Jiang ziheng@apache.org
> >     - Yizhi Liu liuyizhi@apache.org
> >     - Thierry Moreau moreau@cs.washington.edu
> >     - Haichen Shen shenhaichen@gmail.com
> >     - Lianmin Zheng lianminzheng@gmail.com
> >     - Markus Weimer weimer@apache.org
> >     - Sebastian Schelter
> >     - Byung-Gon Chun
> >
> >     Initial Committers (including PPMC members)
> >     - Aditya Atluri Aditya.Atluri@amd.com AMD
> >     - Tianqi Chen tqchen@apache.org University of Washington
> >     - Yuwei Hu huyuwei1995@gmail.com Cornell
> >     - Nick Hynes nhynes@berkeley.edu UC Berkeley
> >     - Ziheng Jiang ziheng@apache.org University of Washington
> >     - Yizhi Liu liuyizhi@apache.org AWS
> >     - Thierry Moreau moreau@cs.washington.edu University of Washington
> >     - Siva srk.it38@gmail.com Huawei
> >     - Haichen Shen shenhaichen@gmail.com AWS
> >     - Masahiro Masuda masahi129@gmail.com Ziosoft
> >     - Zhixun Tan phisiart@gmail.com Google
> >     - Leyuan Wang laurawly@gmail.com AWS
> >     - Eddie Yan eqy@cs.washington.edu University of Washington
> >     - Lianmin Zheng lianminzheng@gmail.com Shanghai Jiao Tong University
> >
> >
> >     === Sponsors: ===
> >
> >     ==== Champion: ====
> >     * Markus Weimer, Microsoft
> >
> >     ==== Mentors: ====
> >     * Sebastian Schelter, New York University
> >     * Byung-Gon Chun, Seoul National University
> >
> >     ==== Sponsoring Entity ====
> >     We are requesting the Incubator to sponsor this project.
> >
> >     ---------------------------------------------------------------------
> >     To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> >     For additional commands, e-mail: general-help@incubator.apache.org
> >
>


-- 
Byung-Gon Chun

Re: [Proposal] Apache TVM

Posted by Sebastian <ss...@apache.org>.
I have also volunteered as a potential mentor for the TVM project and I 
am very excited about it :)

Best,
Sebastian

On 17.02.19 09:02, Kevin A. McGrail wrote:
> +1 binding with a caveat:
> 
> You need mentors and champions from Apache who are available and ideally 
> active in the incubator.  Markus had to step down from Hivemall last 
> year.  Has his situation changed?
> 
> Some comments:
> The hardware artifacts being donated are interesting and something I 
> would support helping with.  We might want to loop in the Secretary and 
> Legal VPs to discuss.
> 
> The reviewer status is something the PMC can elect to do.  They might 
> end up with the same karma as committers on any repos; whether they need 
> that karma is the only hurdle I can think of.  But we like a model of 
> trust for people, so it should be a good thing.
> 
> But otherwise looks like a great start!
> KAM



Re: [Proposal] Apache TVM

Posted by "Kevin A. McGrail" <km...@apache.org>.
+1 binding with a caveat:

You need mentors and champions from Apache who are available and ideally
active in the incubator.  Markus had to step down from Hivemall last year.
Has his situation changed?

Some comments:
The hardware artifacts being donated are interesting and something I would
support helping with.  We might want to loop in the Secretary and Legal VPs
to discuss.

The reviewer status is something the PMC can elect to do.  They might end
up with the same karma as committers on any repos; whether they need that
karma is the only hurdle I can think of.  But we like a model of trust for
people, so it should be a good thing.

But otherwise looks like a great start!
KAM
On Fri, Feb 15, 2019, 13:42 Markus Weimer <weimer@apache.org wrote:

> Hi,
>
> we'd like to start the discussion of accepting TVM into the incubator.
> Please see the proposal below. I'd like to highlight a few things for
> our discussion:
>
> (1) The project already follows many Apache ways like meritocracy,
> open development and such.
>
> (2) The project recognizes an in-between state of "reviewer" that it
> nominates people for between contributor and committer status. We'd
> like to learn if and how to maintain that in the future.
>
> (3) The project contains hardware as a software artifact. We are not
> aware of another ASF project like that and wonder if and how it
> affects its acceptance into the incubator.
>
> Thanks!
>
> Markus
>
> === Proposal ===
>
> We propose to incubate the TVM project into the Apache Software Foundation.
> TVM is a full-stack open source deep learning compiler for CPUs, GPUs, and
> specialized accelerators. It aims to close the gap between
> productivity-focused deep learning frameworks and performance- or
> efficiency-oriented hardware backends.
>
> === Background ===
>
> There is an increasing need to bring machine learning to a wide diversity of
> hardware devices. Current frameworks rely on vendor-specific operator
> libraries and optimize for a narrow range of server-class GPUs. Deploying
> workloads to new platforms -- such as mobile phones, embedded devices, and
> accelerators (e.g., FPGAs, ASICs) -- requires significant manual effort. TVM
> is an end-to-end deep learning compiler that exposes graph-level and
> operator-level optimizations to provide performance portability for deep
> learning workloads across diverse hardware backends. TVM solves optimization
> challenges specific to deep learning, such as high-level operator fusion,
> mapping to arbitrary hardware primitives, and memory latency hiding. It also
> automates the optimization of low-level programs to hardware characteristics
> by employing a novel, learning-based cost modeling method for rapid
> exploration of program optimizations.
>
> Moreover, there is increasing interest in designing specialized hardware to
> accelerate machine learning. Toward this goal, TVM introduces VTA, an open
> source deep learning accelerator, as part of its stack. The open source VTA
> driver and hardware design are a crucial step toward building software
> support for future ASICs. The TVM-VTA flow serves as a frontier for
> researchers and practitioners to explore specialized hardware designs.
>
>
> === Rationale ===
>
> Deep learning compilation will be the next frontier of machine learning
> systems.
> TVM is already one of the leading open source projects pursuing this
> direction.
>
> Specifically, TVM provides infrastructure to use machine learning to
> automatically optimize deployment of deep learning programs on diverse
> hardware
> backends.
>
>
> === VTA: Open Source Hardware Design ===
>
> TVM also contains open source hardware as part of its stack. The VTA
> hardware design is a fully open sourced deep learning accelerator that
> allows us to experiment with the compiler, driver, and runtime, and to
> execute code on FPGAs. VTA provides a path to target future ASICs and to
> build software-driven solutions for co-designing future deep learning
> accelerators.
>
> Having an open source hardware design in an ASF project is rare and perhaps
> unprecedented. We lay out below our rationale for why it is necessary for
> the community.
>
> ASICs specialized for deep learning are going to be at the center of the AI
> revolution. However, given the field's early stage, there is no open
> standard, or even any publicly available hardware interface, that open
> source software can target. VTA provides such an open source hardware
> abstraction layer and allows us to build in abstractions that can be
> effectively used to target other deep learning accelerators.
>
> Moreover, there is an increasing need to co-design future machine learning
> systems with the hardware abstraction. Having a co-designed open source
> hardware stack along with the software creates a path for this route. In
> short, we need open source hardware to build the best open source software.
>
> Finally, we can still view the VTA design as “software”, as its source code
> is written in a hardware description language and can generate a “binary”
> that can run on FPGAs and possibly simulators.
>
>
> === Current Status ===
>
> TVM has been open sourced under the Apache License for one and a half
> years. See the current project website (https://tvm.ai/), GitHub
> (https://github.com/dmlc/tvm/), as well as the TVM Conference
> (https://sampl.cs.washington.edu/tvmconf/#about-tvmconf).
>
> TVM has already been used in production; some highlights are AWS (SageMaker
> Neo), Huawei (AI chip compilation), and Facebook (mobile optimization). We
> anticipate the list of adopters will grow over the next few years.
>
> === Meritocracy ===
>
> The TVM stack began as a research project of the SAMPL group at the Paul G.
> Allen School of Computer Science & Engineering, University of Washington.
> The project is now driven by an open source community involving multiple
> industry and academic institutions. The project is currently governed by the
> Apache Way (https://docs.tvm.ai/contribute/community.html). The project now
> has 14 committers and 6 PMCs, and the list is actively growing. The PMCs use
> a Google Group mailing list to vote in new committers/PMCs, which will be
> moved to private@ after incubation.
>
> The community highly values open collaboration among contributors from
> different backgrounds. The current committers come from UW, Berkeley,
> Cornell, SJTU, AMD, AWS, Huawei, Google, Facebook, and Ziosoft.
>
>
> === Community ===
>
> The project currently has 173 contributors. As per the Apache way, all the
> discussions are conducted in publicly archivable places.
>
> - GitHub issues are used to track development activities and RFCs.
> - The roadmap is public and encourages participation from everyone in the
> community.
> - Discussion forum for general discussions: https://discuss.tvm.ai
> - The content of the Discourse forum can be considered a public archive, as
> it is searchable with all its content.
> - We also created a mailing list archive of the forum, which we will forward
> to an Apache mailing list after incubation:
> https://groups.google.com/forum/#!forum/tvm-discuss-archive
>
> - See https://tvm.ai/community
> - See https://github.com/dmlc/tvm/releases for past releases.
>
> Currently, GitHub issues serve as the dev@ channel. Notably, major features
> always start with RFC discussions to encourage broad participation in the
> community.
>
> The community recognizes potential committers early by bringing contributors
> in as code reviewers and encouraging them to participate in code reviews.
> Code reviews and high-quality code are fundamental to the long-term success
> of the project. The reviewer mechanism in the community serves as a way to
> highlight this aspect, as well as helping the community find good candidates
> to promote to committers.
>
>
>
> ==== Development and Decision Process ====
>
> See
> https://docs.tvm.ai/contribute/community.html#general-development-process
> for the current development guideline. The key points are:
> - Open public roadmap during development, which turns into release notes
> - Major features start with an RFC, everything happens in public
> - Encourage public discussion via archivable channels
> - Strive to reach a consensus on technical decisions through discussion
> - Moderation from committers and encourage everyone’s participation
>
> Example roadmap: https://github.com/dmlc/tvm/issues/1170
> The idea is to keep an active list of roadmap items that can be turned
> directly into release notes. A public roadmap helps to encourage general
> participation from all contributors.
>
> Example 1: New high-level IR
> Recently, a major proposal in the community was to bring in a new high-level
> IR.
> RFC thread: https://github.com/dmlc/tvm/issues/1673
> The pull request: https://github.com/dmlc/tvm/pull/1672
> Everyone who participated in the RFC is invited to review the code as well.
> Follow-up features are proposed as follow-up RFCs.
>
> Example 2: Community guideline improvements
> RFC thread: https://github.com/dmlc/tvm/issues/2017
> A Slack channel was set up per community suggestion, but we still encourage
> the community to only use it for quick communication and to use publicly
> archived channels for development: https://github.com/dmlc/tvm/issues/2174
>
> Example 3: Python 3 timeline proposal
> RFC thread: https://github.com/dmlc/tvm/issues/1602
> Finished with the decision to respect backward compatibility and keep
> Python 2 support.
>
> See
>
> https://github.com/dmlc/tvm/issues?utf8=%E2%9C%93&q=label%3A%22status%3A+RFC%22+
> for a full list of RFCs.
>
>
> === Alignment ===
>
> TVM is useful for building deep learning deployment solutions. It is
> perhaps also the first Apache Incubator proposal that includes both open
> source software and a hardware system design.
>
> It has the potential to benefit existing related ML projects such as MXNet,
> Singa, SystemML, and Mahout by providing powerful low-level primitives for
> matrix operations.
>
>
> === Known Risks ===
>
> ==== Orphaned products ====
>
> The project has a diverse contributor base. As an example, the current
> committers come from UW, Berkeley, Cornell, SJTU, AMD, AWS, Google,
> Facebook, Ziosoft, and Huawei. We are actively growing this list. Given
> that the project has already been used in production, there is minimal risk
> of the project being abandoned.
>
> ==== Inexperience with Open Source ====
>
> The TVM community has extensive experience in open source. Three of the
> current five PMCs are already PPMCs of existing Apache projects. Over the
> course of development, the community has established a good way of bringing
> in RFCs and discussions and, most importantly, of welcoming new
> contributors in the Apache way.
>
> ==== Homogenous Developers ====
>
> The project has a diverse contributor base. As an example, the current
> committers come from UW, Berkeley, Cornell, SJTU, AMD, AWS, Huawei, Google,
> Facebook, and Ziosoft. The community actively seeks to collaborate broadly.
> The PMCs follow a principle of *only* nominating committers from outside
> their own organizations.
>
>
> ==== Reliance on Salaried Developers ====
>
> Most of the current committers are volunteers.
>
> === Relationships with Other Apache Products ===
>
> TVM can serve as a fundamental compiler stack for deep learning and machine
> learning in general. We expect it can benefit projects like MXNet, Spark,
> Flink,
> Mahout, and SystemML.
>
> === Documentation ===
>
> See https://tvm.ai/
>
> === Initial Source ===
>
> https://github.com/dmlc/tvm
>
> We plan to move our repository to https://github.com/apache/incubator-tvm
>
>
> === Source and Intellectual Property Submission Plan ===
>
> TVM source code is available under the Apache 2.0 license. We will work
> with the committers to get ICLAs signed.
>
> === External Dependencies ===
>
> We put all the source-level dependencies under
> https://github.com/dmlc/tvm/tree/master/3rdparty
>
> - dmlc-core (Apache2): https://github.com/dmlc/dmlc-core
> - dlpack (Apache2): https://github.com/dmlc/dlpack
> - HalideIR (MIT): https://github.com/dmlc/HalideIR
> - rang (Unlicense): https://github.com/agauniyal/rang
> - Compiler-RT (BSD)
> - LLVM
>
> All of the current dependencies are stable, which means that the current
> TVM repo is standalone and the main development activities only happen in
> the TVM repo. The dependencies are periodically updated, at a rate of about
> once a month, when necessary. For source-level dependencies, we will always
> point to a stable release version for software releases in the future.
>
>
> === External Dependencies on DMLC projects ===
>
> There are three dependencies on DMLC projects in 3rdparty. The current
> proposal is to keep these dependencies in 3rdparty. We elaborate on the
> background of these dependencies below:
>
> - dmlc-core: a minimal module for logging and memory serialization. It is
> currently used by projects including Apache MXNet, TVM, and XGBoost. The
> project is relatively stable, with around one change a week (most recent
> changes come from the XGBoost project). TVM’s dependency on dmlc-core is
> minimal, using only its logging feature.
> - dlpack: a minimal consensus standard for an in-memory tensor format. It
> is currently used by PyTorch, Apache MXNet, Chainer, and a few other
> projects.
> - HalideIR: a minimal IR data structure that was isolated from a fork of
> the Halide project. We keep the license as MIT to respect the original
> license and its origin. A common consensus in the TVM project is that we
> keep the old derived code in HalideIR (which is stable), and all new
> development happens in the TVM repo.
>
> The main reasons to propose keeping these dependencies are:
> - Each of the dependencies has a user and developer community of its own,
> which is larger than the TVM community, or a different license (MIT in
> HalideIR).
> - These dependencies are stable and update at about a monthly rate.
>
> While it is possible to fork the code into the TVM repo, given that the
> current TVM repo is self-contained and community development is standalone,
> we feel that there is enough justification to treat these as 3rdparty
> dependencies.
>
>
> === Required Resources ===
>
> ==== Mailing List: ====
> The usual mailing lists are expected to be set up when entering incubation:
>
> * private@tvm.apache.org
> * dev@tvm.apache.org, subscribed to GitHub issues
> * discuss-archive@tvm.apache.org, archiving the discussion content of the
> Discourse user forum
>
>
> Currently, we only use issues for development and encourage the community
> to use the discussion forum when possible. As a result, the current GitHub
> issues serve a similar purpose to dev@, so we propose to subscribe GitHub
> issues to dev@ after incubation.
>
> The current community uses https://discuss.tvm.ai/ for general technical
> and support discussions. The community forum is maintained by PMCs. We
> propose to continue to use the forum and archive the posts to an Apache
> mailing list. We already have a mechanism to do so (see
> https://groups.google.com/forum/#!forum/tvm-discuss-archive).
>
>
>
> ==== Git Repositories: ====
>
> Upon entering incubation, we plan to transfer the existing repo from
> https://github.com/dmlc/tvm to https://github.com/apache/incubator-tvm.
>
>
>
>
> ==== Issue Tracking: ====
>
> TVM currently uses GitHub to track issues. We would like to continue to do
> so
> while we discuss migration possibilities with the ASF Infra team.
>
> ==== URL: ====
>
> Current project website: https://tvm.ai/. As we proceed, the website will
> migrate to https://tvm.incubator.apache.org and hopefully
> https://tvm.apache.org.
>
> === Initial Committers and PMCs ===
>
> The project has already followed the Apache way of development (in terms of
> meritocracy, community, and archiving of public discussion). We plan to
> transition the current PMCs to PPMCs, and committers to Apache committers.
> There are also ongoing votes and discussions in the current TVM PMC private
> mailing list about new committers/PMCs (we also invited our tentative
> mentors as observers to the mailing list). We plan to migrate the
> discussions to private@ after the proposal has been accepted and to bring
> in the new committers/PPMCs according to the standard Apache community
> procedure.
>
>
> Initial PPMCs
> - Tianqi Chen tqchen@apache.org
> - Ziheng Jiang ziheng@apache.org
> - Yizhi Liu liuyizhi@apache.org
> - Thierry Moreau moreau@cs.washington.edu
> - Haichen Shen shenhaichen@gmail.com
> - Lianmin Zheng lianminzheng@gmail.com
> - Markus Weimer weimer@apache.org
> - Sebastian Schelter
> - Byung-Gon Chun
>
> Initial Committers (Including PPMCs)
> - Aditya Atluri Aditya.Atluri@amd.com AMD
> - Tianqi Chen tqchen@apache.org University of Washington
> - Yuwei Hu huyuwei1995@gmail.com Cornell
> - Nick Hynes nhynes@berkeley.edu UC Berkeley
> - Ziheng Jiang ziheng@apache.org University of Washington
> - Yizhi Liu liuyizhi@apache.org AWS
> - Thierry Moreau moreau@cs.washington.edu University of Washington
> - Siva srk.it38@gmail.com Huawei
> - Haichen Shen shenhaichen@gmail.com AWS
> - Masahiro Masuda masahi129@gmail.com Ziosoft
> - Zhixun Tan phisiart@gmail.com Google
> - Leyuan Wang laurawly@gmail.com AWS
> - Eddie Yan eqy@cs.washington.edu University of Washington
> - Lianmin Zheng lianminzheng@gmail.com Shanghai Jiao Tong University
>
>
> === Sponsors: ===
>
> ==== Champion: ====
> * Markus Weimer, Microsoft
>
> ==== Mentors: ====
> * Sebastian Schelter, New York University
> * Byung-Gon Chun, Seoul National University
>
> ==== Sponsoring Entity ====
> We are requesting the Incubator to sponsor this project.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> For additional commands, e-mail: general-help@incubator.apache.org
>
>

Re: [Proposal] Apache TVM

Posted by Furkan KAMACI <fu...@gmail.com>.
Thanks Markus! I'll be ready to help!

On Wed, 27 Feb 2019 at 20:32, Henry Saputra <he...@gmail.com>
wrote:

> Thanks, Markus. Looking forward to the VOTE thread.
>
> This would be a great addition to the Apache Software Foundation.
>
> - Henry
>
> On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer <we...@apache.org> wrote:
>
> > Thanks everyone for the discussion thus far. Based on it, I have uploaded
> > an updated proposal here:
> >
> > https://wiki.apache.org/incubator/TVMProposal
> >
> > The changes made are:
> >
> >    1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
> >    pointing that out!
> >    2. Adding Furkan, Timothy and Henry as additional mentors. We can use
> >    all the help :)
> >
> > Assuming there are no further discussion points, I'd like to move forward
> > with a [VOTE]. I'll let this sit here and simmer for another 24h to make
> > sure we are done with the discussion phase.
> >
> > Thanks,
> >
> > Markus
> >
> >
> > On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen <tq...@apache.org> wrote:
> >
> > > Thanks, everyone, for the helpful feedback. I would like to clarify a
> > > few points raised so far on behalf of the current TVM PMC.
> > >
> > > > PMC vs PMC member
> > >
> > > Thanks for pointing it out. This is something we overlooked and will
> > update
> > > the proposal to make the change accordingly.
> > >
> > > > Champion
> > >
> > > Markus has been actively engaging with the TVM community and helped the
> > > community start the incubation process. These efforts include:
> > > - Introduced the Apache way at the TVM conference last Dec
> > >    -
> > >
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > > - Helped the community start the incubation conversation (also thanks
> > > to Sebastian and Gon)
> > >    - https://github.com/dmlc/tvm/issues/2401
> > > - Watched the pre-incubation private list and gave helpful feedback
> > >
> > > While we do not expect our mentor to actively watch the community on a
> > > daily basis (many of our committers only contribute a few days a week),
> > > he has been very responsive, helped us shape the incubation proposal,
> > > and, most importantly, been a strong advocate of the Apache way. I
> > > personally think he is more than qualified as our champion :)
> > > > Hardware artifact
> > >
> > > IANAL; however, Apache only releases source code, and our source code
> > > is in the form of software source code (HLS C, and we are moving to
> > > Chisel/Scala). Anyone can then take the software source code and
> > > generate an unofficial hardware release.
> > >
> > > Tianqi
> > >
> > >
> > > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > > bdelacretaz@codeconsult.ch> wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> > justin@classsoftware.com
> > > >
> > > > wrote:
> > > > > > If the Apache License works for those artifacts I think that's
> > > fine...
> > > > >
> > > > > It probably doesn’t, but it's complex and IANAL; I have touched on
> > > > > this in IoT talks at previous ApacheCons...
> > > >
> > > > FWIW the prior discussions that I mentioned are linked below - from
> > > > board@, so accessible to ASF Members or Officers only, but we can
> > > > distill them as needed if a concrete need appears with TVM.
> > > >
> > > > We didn't go past the discussions stage at that time (2011) but if
> > > > there's another case of hardware at the ASF I'm willing to help
> > > > restart those discussions to move this forward. Either to define
> which
> > > > additions to the Apache License are required, or to clarify that it's
> > > > ok as is.
> > > >
> > > > So unless there are specific objections about accepting a project
> > > > which includes hardware as a software artifact I'm in favor of
> > > > accepting TVM and sorting out these things during incubation.
> > > >
> > > > -Bertrand
> > > >
> > > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > > https://s.apache.org/hw2011_2
> > > >
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> > > > For additional commands, e-mail: general-help@incubator.apache.org
> > > >
> > > >
> > >
> >
>

Re: [Proposal] Apache TVM

Posted by Henry Saputra <he...@gmail.com>.
Thanks, Markus. Looking forward to the VOTE thread.

This would be a great addition to the Apache Software Foundation.

- Henry

On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer <we...@apache.org> wrote:

> Thanks everyone for the discussion thus far. Based on it, I have uploaded
> an updated proposal here:
>
> https://wiki.apache.org/incubator/TVMProposal
>
> The changes made are:
>
>    1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
>    pointing that out!
>    2. Adding Furkan, Timothy and Henry as additional mentors. We can use
>    all the help :)
>
> Assuming there are no further discussion points, I'd like to move forward
> with a [VOTE]. I'll let this sit here and simmer for another 24h to make
> sure we are done with the discussion phase.
>
> Thanks,
>
> Markus
>
>
> On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen <tq...@apache.org> wrote:
>
> > Thanks, everyone, for the helpful feedback. I would like to clarify a few
> > points raised so far on behalf of the current TVM PMC.
> >
> > > PMC vs PMC member
> >
> > Thanks for pointing it out. This is something we overlooked and will
> update
> > the proposal to make the change accordingly.
> >
> > > Champion
> >
> > Markus has been actively engaging with the TVM community and helped the
> > community start the incubation process. These efforts include:
> > - Introduced the Apache way at the TVM conference last Dec
> >    -
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > - Helped the community start the incubation conversation (also thanks to
> > Sebastian and Gon)
> >    - https://github.com/dmlc/tvm/issues/2401
> > - Watched the pre-incubation private list and gave helpful feedback
> >
> > While we do not expect our mentor to actively watch the community on a
> > daily basis (many of our committers only contribute a few days a week),
> > he has been very responsive, helped us shape the incubation proposal,
> > and, most importantly, been a strong advocate of the Apache way. I
> > personally think he is more than qualified as our champion :)
> >
> > > Hardware artifact
> >
> > IANAL; however, Apache only releases source code, and our source code is
> > in the form of software source code (HLS C, and we are moving to
> > Chisel/Scala). Anyone can then take the software source code and
> > generate an unofficial hardware release.
> >
> > Tianqi
> >
> >
> > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > bdelacretaz@codeconsult.ch> wrote:
> >
> > > Hi,
> > >
> > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> justin@classsoftware.com
> > >
> > > wrote:
> > > > > If the Apache License works for those artifacts I think that's
> > fine...
> > > >
> > > > It probably doesn’t, but it's complex and IANAL; I have touched on
> > > > this in IoT talks at previous ApacheCons...
> > >
> > > FWIW the prior discussions that I mentioned are linked below - from
> > > board@, so accessible to ASF Members or Officers only, but we can
> > > distill them as needed if a concrete need appears with TVM.
> > >
> > > We didn't go past the discussions stage at that time (2011) but if
> > > there's another case of hardware at the ASF I'm willing to help
> > > restart those discussions to move this forward. Either to define which
> > > additions to the Apache License are required, or to clarify that it's
> > > ok as is.
> > >
> > > So unless there are specific objections about accepting a project
> > > which includes hardware as a software artifact I'm in favor of
> > > accepting TVM and sorting out these things during incubation.
> > >
> > > -Bertrand
> > >
> > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > https://s.apache.org/hw2011_2
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> > > For additional commands, e-mail: general-help@incubator.apache.org
> > >
> > >
> >
>

Re: [Proposal] Apache TVM

Posted by Tianqi Chen <tq...@cs.washington.edu>.
Thanks Henry!

On Thu, Feb 28, 2019 at 10:57 AM Henry Saputra <he...@gmail.com>
wrote:

> Thanks, Markus.
>
> Hope you do not mind but I have edited the proposal to reflect the changes.
> Since the people did not actually change, I think we can continue with the
> VOTE
>
>
> - Henry
>
> On Thu, Feb 28, 2019 at 10:20 AM Markus Weimer <we...@apache.org> wrote:
>
> > On Thu, Feb 28, 2019 at 9:36 AM Henry Saputra <he...@gmail.com>
> > wrote:
> >
> > > > What I can do instead is to restructure the proposal to have the PPMC
> > > > include mentors and the PMC members from TVM.
> > > > The rest of the committers from TVM will be invited via VOTE by the PPMC.
> > >
> >
> > Yes, that is what I should have done in the final edits of the Proposal,
> > but did not do. This is how all other incubator projects I've been in
> have
> > done it: PPMC is mentors + leaders / founders / members of the inbound
> > project. For TVM, the most appropriate thing is to have the PPMC be
> mentors
> > + TVM's current PMC.
> >
> > If we agree on that, I'd like to make the change in the proposal, and
> leave
> > the vote open.
> >
> > Thanks for spotting this, Henry!
> >
> > Markus
> >
>

Re: [Proposal] Apache TVM

Posted by Markus Weimer <we...@apache.org>.
On Thu, Feb 28, 2019 at 10:57 AM Henry Saputra <he...@gmail.com> wrote:
> Hope you do not mind but I have edited the proposal to reflect the changes.

Thanks!

Markus

---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org


Re: [Proposal] Apache TVM

Posted by Markus Weimer <ma...@weimo.de>.
On Thu, Feb 28, 2019 at 10:57 AM Henry Saputra <he...@gmail.com> wrote:
> Hope you do not mind but I have edited the proposal to reflect the changes.

Thanks!

Markus

---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org


Re: [Proposal] Apache TVM

Posted by Henry Saputra <he...@gmail.com>.
Thanks, Markus.

Hope you do not mind, but I have edited the proposal to reflect the changes.
Since the people did not actually change, I think we can continue with the
VOTE.


- Henry

On Thu, Feb 28, 2019 at 10:20 AM Markus Weimer <we...@apache.org> wrote:

> On Thu, Feb 28, 2019 at 9:36 AM Henry Saputra <he...@gmail.com>
> wrote:
>
> > What I can do instead is to restructure the proposal to have the PPMC
> > include mentors and the PMC members from TVM.
> > The rest of the committers from TVM will be invited via VOTE by the PPMC.
> >
>
> Yes, that is what I should have done in the final edits of the Proposal,
> but did not do. This is how all other incubator projects I've been in have
> done it: PPMC is mentors + leaders / founders / members of the inbound
> project. For TVM, the most appropriate thing is to have the PPMC be mentors
> + TVM's current PMC.
>
> If we agree on that, I'd like to make the change in the proposal, and leave
> the vote open.
>
> Thanks for spotting this, Henry!
>
> Markus
>

Re: [Proposal] Apache TVM

Posted by Markus Weimer <we...@apache.org>.
On Thu, Feb 28, 2019 at 9:36 AM Henry Saputra <he...@gmail.com>
wrote:

> > What I can do instead is to restructure the proposal to have the PPMC
> > include mentors and the PMC members from TVM.
> > The rest of the committers from TVM will be invited via VOTE by the PPMC.
>

Yes, that is what I should have done in the final edits of the Proposal,
but did not do. This is how all other incubator projects I've been in have
done it: PPMC is mentors + leaders / founders / members of the inbound
project. For TVM, the most appropriate thing is to have the PPMC be mentors
+ TVM's current PMC.

If we agree on that, I'd like to make the change in the proposal, and leave
the vote open.

Thanks for spotting this, Henry!

Markus

Re: [Proposal] Apache TVM

Posted by Henry Saputra <he...@gmail.com>.
Hi Tianqi,

Actually, for the initial committers, I believe we can onboard them as part
of bootstrapping the project.

Any member of the IPMC could keep me honest here too =)

Reference for Incubator PPMC for info:
https://incubator.apache.org/guides/ppmc.html

Thanks,

- Henry

On Thu, Feb 28, 2019 at 9:36 AM Henry Saputra <he...@gmail.com>
wrote:

> HI Tianqi,
>
> What I can do instead is to restructure the proposal to have the PPMC
> include mentors and the PMC members from TVM.
> The rest of the committers from TVM will be invited via VOTE by the PPMC.
>
> Would that work?
>
> - Henry
>
> On Thu, Feb 28, 2019 at 2:13 AM Tianqi Chen <tq...@cs.washington.edu>
> wrote:
>
>> Hi Henry:
>>
>> Because the TVM community already adopts Apache meritocracy and has a
>> separation of PMC and committers. Every new member (PMC member or
>> committer) is formally discussed, and we welcome each member into the
>> community by summarizing their contributions.
>> If possible, we would like to keep the same structure during incubation.
>> The current PMC members have been actively proposing new committers and
>> PMC members from different organizations in the past few months and will
>> continue doing so after incubation.
>>
>> Tianqi
>>
>> On Wed, Feb 27, 2019 at 9:07 PM Henry Saputra <he...@gmail.com>
>> wrote:
>>
>> > Bit more clarification: as a new podling in Apache, the initial members
>> > of the PPMC consist of mentors and the initial committers of the
>> > project.
>> >
>> > I understand TVM already works mirroring ASF meritocracy [1], but we
>> > need to change the proposal to follow Apache guidelines to help us
>> > cross-check membership later for onboarding.
>> >
>> > If it is OK with you I will change the proposal to merge the "Initial
>> PPMC
>> > Members" and "Initial Committers", minus the mentors from ASF, to be
>> just
>> > Initial Committers.
>> >
>> > Thanks,
>> >
>> > - Henry
>> >
>> >
>> > [1] https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md
>> >
>> > On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer <we...@apache.org>
>> wrote:
>> >
>> > > Thanks everyone for the discussion thus far. Based on it, I have
>> uploaded
>> > > an updated proposal here:
>> > >
>> > > https://wiki.apache.org/incubator/TVMProposal
>> > >
>> > > The changes made are:
>> > >
>> > >    1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
>> > >    pointing that out!
>> > >    2. Adding Furkan, Timothy and Henry as additional mentors. We can
>> use
>> > >    all the help :)
>> > >
>> > > Assuming there are no further discussion points, I'd like to move
>> forward
>> > > with a [VOTE]. I'll let this sit here and simmer for another 24h to
>> make
>> > > sure we are done with the discussion phase.
>> > >
>> > > Thanks,
>> > >
>> > > Markus
>> > >
>> > >
>> > > On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen <tq...@apache.org>
>> wrote:
>> > >
>> > > > Thanks, everyone for helpful feedbacks. I would like to clarify a
>> few
>> > > > points being raised so far on behalf of the current TVM PMC.
>> > > >
>> > > > > PMC vs PMC member
>> > > >
>> > > > Thanks for pointing it out. This is something we overlooked and will
>> > > update
>> > > > the proposal to make the change accordingly.
>> > > >
>> > > > > Champion
>> > > >
>> > > > Markus has been actively engaging with the TVM community and helped
>> the
>> > > > community start the incubation process. These efforts include:
>> > > > - Introduce the Apache way to in the TVM conference last Dec
>> > > >    -
>> > > >
>> > >
>> >
>> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
>> > > > - Help the community to start the incubation conversation(also
>> Thanks
>> > to
>> > > > Sebastian and Gon)
>> > > >    - https://github.com/dmlc/tvm/issues/2401
>> > > > - Watch the pre-incubation private list, and give helpful feedback
>> > > >
>> > > > While we do not expect our mentor to actively watch the community on
>> > the
>> > > > daily basis(many of our committers only contribute a few days in a
>> > week),
>> > > > he has been very responsive and helped us to shape the incubation
>> > > proposal
>> > > > and most importantly be a strong advocate of the Apache way. I
>> > personally
>> > > > think he is more than qualified as our champion:)
>> > > >
>> > > > > Hardware artifact
>> > > >
>> > > > INAL, however, given that Apache only releases source code and our
>> > source
>> > > > code is in the form of software source code (HLS C and we are
>> moving to
>> > > > Chisel-(scala) ). Then anyone can take the software source code and
>> > > > generate unofficial hardware release.
>> > > >
>> > > > Tianqi
>> > > >
>> > > >
>> > > > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
>> > > > bdelacretaz@codeconsult.ch> wrote:
>> > > >
>> > > > > Hi,
>> > > > >
>> > > > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
>> > > justin@classsoftware.com
>> > > > >
>> > > > > wrote:
>> > > > > > > If the Apache License works for those artifacts I think that's
>> > > > fine...
>> > > > > >
>> > > > > > It probably doesn’t, but it's complex and INAL, but I have
>> touched
>> > on
>> > > > > this about this in IoT talks at previous ApacheCons...
>> > > > >
>> > > > > FWIW the prior discussions that I mentioned are linked below -
>> from
>> > > > > board@ so accessible for ASF Members of Officers only, but we can
>> > > > > distill them as needed if a concrete need appears with TVM.
>> > > > >
>> > > > > We didn't go past the discussions stage at that time (2011) but if
>> > > > > there's another case of hardware at the ASF I'm willing to help
>> > > > > restart those discussions to move this forward. Either to define
>> > which
>> > > > > additions to the Apache License are required, or to clarify that
>> it's
>> > > > > ok as is.
>> > > > >
>> > > > > So unless there are specific objections about accepting a project
>> > > > > which includes hardware as a software artifact I'm in favor of
>> > > > > accepting TVM and sorting out these things during incubation.
>> > > > >
>> > > > > -Bertrand
>> > > > >
>> > > > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
>> > > > > https://s.apache.org/hw2011_2
>> > > > >
>> > > > >
>> ---------------------------------------------------------------------
>> > > > > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
>> > > > > For additional commands, e-mail:
>> general-help@incubator.apache.org
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>

Re: [Proposal] Apache TVM

Posted by Henry Saputra <he...@gmail.com>.
Hi Tianqi,

What I can do instead is restructure the proposal so that the PPMC
includes the mentors and the PMC members from TVM.
The rest of the committers from TVM will then be invited via a VOTE by the PPMC.

Would that work?

- Henry

On Thu, Feb 28, 2019 at 2:13 AM Tianqi Chen <tq...@cs.washington.edu>
wrote:

> Hi Henry:
>
> Because the TVM community already adopts Apache meritocracy and has a
> separation of PMC and committers. Every new member(PMC and committers) are
> formally discussed and we welcome each member in the community by
> summarizing their contributions.
> If possible,  we would like to keep the same structure during incubation.
> The current PMC members are actively proposing new committers and PMC
> members from different organizations in the past few months and will
> continue doing so after the incubation.
>
> Tianqi
>
> On Wed, Feb 27, 2019 at 9:07 PM Henry Saputra <he...@gmail.com>
> wrote:
>
> > Bit more clarifications, as new podling in Apache, the initial members of
> > PPMC consist of mentors and initial commiters of the project.
> >
> > I understand TVM already work mirroring ASF meritoracy [1] but we need to
> > change the proposal to follow Apache guidelines to help us cross check
> > membership later for onboarding.
> >
> > If it is OK with you I will change the proposal to merge the "Initial
> PPMC
> > Members" and "Initial Committers", minus the mentors from ASF, to be just
> > Initial Committers.
> >
> > Thanks,
> >
> > - Henry
> >
> >
> > [1] https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md
> >
> > On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer <we...@apache.org> wrote:
> >
> > > Thanks everyone for the discussion thus far. Based on it, I have
> uploaded
> > > an updated proposal here:
> > >
> > > https://wiki.apache.org/incubator/TVMProposal
> > >
> > > The changes made are:
> > >
> > >    1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
> > >    pointing that out!
> > >    2. Adding Furkan, Timothy and Henry as additional mentors. We can
> use
> > >    all the help :)
> > >
> > > Assuming there are no further discussion points, I'd like to move
> forward
> > > with a [VOTE]. I'll let this sit here and simmer for another 24h to
> make
> > > sure we are done with the discussion phase.
> > >
> > > Thanks,
> > >
> > > Markus
> > >
> > >
> > > On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen <tq...@apache.org> wrote:
> > >
> > > > Thanks, everyone for helpful feedbacks. I would like to clarify a few
> > > > points being raised so far on behalf of the current TVM PMC.
> > > >
> > > > > PMC vs PMC member
> > > >
> > > > Thanks for pointing it out. This is something we overlooked and will
> > > update
> > > > the proposal to make the change accordingly.
> > > >
> > > > > Champion
> > > >
> > > > Markus has been actively engaging with the TVM community and helped
> the
> > > > community start the incubation process. These efforts include:
> > > > - Introduce the Apache way to in the TVM conference last Dec
> > > >    -
> > > >
> > >
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > > > - Help the community to start the incubation conversation(also Thanks
> > to
> > > > Sebastian and Gon)
> > > >    - https://github.com/dmlc/tvm/issues/2401
> > > > - Watch the pre-incubation private list, and give helpful feedback
> > > >
> > > > While we do not expect our mentor to actively watch the community on
> > the
> > > > daily basis(many of our committers only contribute a few days in a
> > week),
> > > > he has been very responsive and helped us to shape the incubation
> > > proposal
> > > > and most importantly be a strong advocate of the Apache way. I
> > personally
> > > > think he is more than qualified as our champion:)
> > > >
> > > > > Hardware artifact
> > > >
> > > > INAL, however, given that Apache only releases source code and our
> > source
> > > > code is in the form of software source code (HLS C and we are moving
> to
> > > > Chisel-(scala) ). Then anyone can take the software source code and
> > > > generate unofficial hardware release.
> > > >
> > > > Tianqi
> > > >
> > > >
> > > > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > > > bdelacretaz@codeconsult.ch> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> > > justin@classsoftware.com
> > > > >
> > > > > wrote:
> > > > > > > If the Apache License works for those artifacts I think that's
> > > > fine...
> > > > > >
> > > > > > It probably doesn’t, but it's complex and INAL, but I have
> touched
> > on
> > > > > this about this in IoT talks at previous ApacheCons...
> > > > >
> > > > > FWIW the prior discussions that I mentioned are linked below - from
> > > > > board@ so accessible for ASF Members of Officers only, but we can
> > > > > distill them as needed if a concrete need appears with TVM.
> > > > >
> > > > > We didn't go past the discussions stage at that time (2011) but if
> > > > > there's another case of hardware at the ASF I'm willing to help
> > > > > restart those discussions to move this forward. Either to define
> > which
> > > > > additions to the Apache License are required, or to clarify that
> it's
> > > > > ok as is.
> > > > >
> > > > > So unless there are specific objections about accepting a project
> > > > > which includes hardware as a software artifact I'm in favor of
> > > > > accepting TVM and sorting out these things during incubation.
> > > > >
> > > > > -Bertrand
> > > > >
> > > > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > > > https://s.apache.org/hw2011_2
> > > > >
> > > > >
> ---------------------------------------------------------------------
> > > > > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> > > > > For additional commands, e-mail: general-help@incubator.apache.org
> > > > >
> > > > >
> > > >
> > >
> >
>

Re: [Proposal] Apache TVM

Posted by Tianqi Chen <tq...@cs.washington.edu>.
Hi Henry:

The TVM community already follows Apache meritocracy and has a clear
separation between PMC members and committers. Every new member (PMC member or
committer) is formally discussed, and we welcome each new member to the
community by summarizing their contributions.
If possible, we would like to keep the same structure during incubation.
The current PMC members have been actively proposing new committers and PMC
members from different organizations over the past few months and will
continue doing so after incubation.

Tianqi

On Wed, Feb 27, 2019 at 9:07 PM Henry Saputra <he...@gmail.com>
wrote:

> Bit more clarifications, as new podling in Apache, the initial members of
> PPMC consist of mentors and initial commiters of the project.
>
> I understand TVM already work mirroring ASF meritoracy [1] but we need to
> change the proposal to follow Apache guidelines to help us cross check
> membership later for onboarding.
>
> If it is OK with you I will change the proposal to merge the "Initial PPMC
> Members" and "Initial Committers", minus the mentors from ASF, to be just
> Initial Committers.
>
> Thanks,
>
> - Henry
>
>
> [1] https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md
>
> On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer <we...@apache.org> wrote:
>
> > Thanks everyone for the discussion thus far. Based on it, I have uploaded
> > an updated proposal here:
> >
> > https://wiki.apache.org/incubator/TVMProposal
> >
> > The changes made are:
> >
> >    1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
> >    pointing that out!
> >    2. Adding Furkan, Timothy and Henry as additional mentors. We can use
> >    all the help :)
> >
> > Assuming there are no further discussion points, I'd like to move forward
> > with a [VOTE]. I'll let this sit here and simmer for another 24h to make
> > sure we are done with the discussion phase.
> >
> > Thanks,
> >
> > Markus
> >
> >
> > On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen <tq...@apache.org> wrote:
> >
> > > Thanks, everyone for helpful feedbacks. I would like to clarify a few
> > > points being raised so far on behalf of the current TVM PMC.
> > >
> > > > PMC vs PMC member
> > >
> > > Thanks for pointing it out. This is something we overlooked and will
> > update
> > > the proposal to make the change accordingly.
> > >
> > > > Champion
> > >
> > > Markus has been actively engaging with the TVM community and helped the
> > > community start the incubation process. These efforts include:
> > > - Introduce the Apache way to in the TVM conference last Dec
> > >    -
> > >
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > > - Help the community to start the incubation conversation(also Thanks
> to
> > > Sebastian and Gon)
> > >    - https://github.com/dmlc/tvm/issues/2401
> > > - Watch the pre-incubation private list, and give helpful feedback
> > >
> > > While we do not expect our mentor to actively watch the community on
> the
> > > daily basis(many of our committers only contribute a few days in a
> week),
> > > he has been very responsive and helped us to shape the incubation
> > proposal
> > > and most importantly be a strong advocate of the Apache way. I
> personally
> > > think he is more than qualified as our champion:)
> > >
> > > > Hardware artifact
> > >
> > > INAL, however, given that Apache only releases source code and our
> source
> > > code is in the form of software source code (HLS C and we are moving to
> > > Chisel-(scala) ). Then anyone can take the software source code and
> > > generate unofficial hardware release.
> > >
> > > Tianqi
> > >
> > >
> > > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > > bdelacretaz@codeconsult.ch> wrote:
> > >
> > > > Hi,
> > > >
> > > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> > justin@classsoftware.com
> > > >
> > > > wrote:
> > > > > > If the Apache License works for those artifacts I think that's
> > > fine...
> > > > >
> > > > > It probably doesn’t, but it's complex and INAL, but I have touched
> on
> > > > this about this in IoT talks at previous ApacheCons...
> > > >
> > > > FWIW the prior discussions that I mentioned are linked below - from
> > > > board@ so accessible for ASF Members of Officers only, but we can
> > > > distill them as needed if a concrete need appears with TVM.
> > > >
> > > > We didn't go past the discussions stage at that time (2011) but if
> > > > there's another case of hardware at the ASF I'm willing to help
> > > > restart those discussions to move this forward. Either to define
> which
> > > > additions to the Apache License are required, or to clarify that it's
> > > > ok as is.
> > > >
> > > > So unless there are specific objections about accepting a project
> > > > which includes hardware as a software artifact I'm in favor of
> > > > accepting TVM and sorting out these things during incubation.
> > > >
> > > > -Bertrand
> > > >
> > > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > > https://s.apache.org/hw2011_2
> > > >
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> > > > For additional commands, e-mail: general-help@incubator.apache.org
> > > >
> > > >
> > >
> >
>

Re: [Proposal] Apache TVM

Posted by Henry Saputra <he...@gmail.com>.
A bit more clarification: as a new podling in Apache, the initial members of
the PPMC consist of the mentors and the initial committers of the project.

I understand TVM already works in a way that mirrors ASF meritocracy [1], but
we need to change the proposal to follow Apache guidelines so we can
cross-check membership later during onboarding.

If it is OK with you, I will change the proposal to merge the "Initial PPMC
Members" and "Initial Committers", minus the mentors from the ASF, into just
Initial Committers.

Thanks,

- Henry


[1] https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md

On Tue, Feb 26, 2019 at 9:56 AM Markus Weimer <we...@apache.org> wrote:

> Thanks everyone for the discussion thus far. Based on it, I have uploaded
> an updated proposal here:
>
> https://wiki.apache.org/incubator/TVMProposal
>
> The changes made are:
>
>    1. Rectify the language around PMC vs. PMC member. Thanks Greg, for
>    pointing that out!
>    2. Adding Furkan, Timothy and Henry as additional mentors. We can use
>    all the help :)
>
> Assuming there are no further discussion points, I'd like to move forward
> with a [VOTE]. I'll let this sit here and simmer for another 24h to make
> sure we are done with the discussion phase.
>
> Thanks,
>
> Markus
>
>
> On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen <tq...@apache.org> wrote:
>
> > Thanks, everyone for helpful feedbacks. I would like to clarify a few
> > points being raised so far on behalf of the current TVM PMC.
> >
> > > PMC vs PMC member
> >
> > Thanks for pointing it out. This is something we overlooked and will
> update
> > the proposal to make the change accordingly.
> >
> > > Champion
> >
> > Markus has been actively engaging with the TVM community and helped the
> > community start the incubation process. These efforts include:
> > - Introduce the Apache way to in the TVM conference last Dec
> >    -
> >
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> > - Help the community to start the incubation conversation(also Thanks to
> > Sebastian and Gon)
> >    - https://github.com/dmlc/tvm/issues/2401
> > - Watch the pre-incubation private list, and give helpful feedback
> >
> > While we do not expect our mentor to actively watch the community on the
> > daily basis(many of our committers only contribute a few days in a week),
> > he has been very responsive and helped us to shape the incubation
> proposal
> > and most importantly be a strong advocate of the Apache way. I personally
> > think he is more than qualified as our champion:)
> >
> > > Hardware artifact
> >
> > INAL, however, given that Apache only releases source code and our source
> > code is in the form of software source code (HLS C and we are moving to
> > Chisel-(scala) ). Then anyone can take the software source code and
> > generate unofficial hardware release.
> >
> > Tianqi
> >
> >
> > On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> > bdelacretaz@codeconsult.ch> wrote:
> >
> > > Hi,
> > >
> > > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <
> justin@classsoftware.com
> > >
> > > wrote:
> > > > > If the Apache License works for those artifacts I think that's
> > fine...
> > > >
> > > > It probably doesn’t, but it's complex and INAL, but I have touched on
> > > this about this in IoT talks at previous ApacheCons...
> > >
> > > FWIW the prior discussions that I mentioned are linked below - from
> > > board@ so accessible for ASF Members of Officers only, but we can
> > > distill them as needed if a concrete need appears with TVM.
> > >
> > > We didn't go past the discussions stage at that time (2011) but if
> > > there's another case of hardware at the ASF I'm willing to help
> > > restart those discussions to move this forward. Either to define which
> > > additions to the Apache License are required, or to clarify that it's
> > > ok as is.
> > >
> > > So unless there are specific objections about accepting a project
> > > which includes hardware as a software artifact I'm in favor of
> > > accepting TVM and sorting out these things during incubation.
> > >
> > > -Bertrand
> > >
> > > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > > https://s.apache.org/hw2011_2
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> > > For additional commands, e-mail: general-help@incubator.apache.org
> > >
> > >
> >
>

Re: [Proposal] Apache TVM

Posted by Markus Weimer <we...@apache.org>.
Thanks, everyone, for the discussion thus far. Based on it, I have uploaded
an updated proposal here:

https://wiki.apache.org/incubator/TVMProposal

The changes made are:

   1. Rectified the language around PMC vs. PMC member. Thanks, Greg, for
   pointing that out!
   2. Added Furkan, Timothy, and Henry as additional mentors. We can use
   all the help :)

Assuming there are no further discussion points, I'd like to move forward
with a [VOTE]. I'll let this sit here and simmer for another 24h to make
sure we are done with the discussion phase.

Thanks,

Markus


On Mon, Feb 18, 2019 at 1:08 PM Tianqi Chen <tq...@apache.org> wrote:

> Thanks, everyone for helpful feedbacks. I would like to clarify a few
> points being raised so far on behalf of the current TVM PMC.
>
> > PMC vs PMC member
>
> Thanks for pointing it out. This is something we overlooked and will update
> the proposal to make the change accordingly.
>
> > Champion
>
> Markus has been actively engaging with the TVM community and helped the
> community start the incubation process. These efforts include:
> - Introduce the Apache way to in the TVM conference last Dec
>    -
> https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
> - Help the community to start the incubation conversation(also Thanks to
> Sebastian and Gon)
>    - https://github.com/dmlc/tvm/issues/2401
> - Watch the pre-incubation private list, and give helpful feedback
>
> While we do not expect our mentor to actively watch the community on the
> daily basis(many of our committers only contribute a few days in a week),
> he has been very responsive and helped us to shape the incubation proposal
> and most importantly be a strong advocate of the Apache way. I personally
> think he is more than qualified as our champion:)
>
> > Hardware artifact
>
> INAL, however, given that Apache only releases source code and our source
> code is in the form of software source code (HLS C and we are moving to
> Chisel-(scala) ). Then anyone can take the software source code and
> generate unofficial hardware release.
>
> Tianqi
>
>
> On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
> bdelacretaz@codeconsult.ch> wrote:
>
> > Hi,
> >
> > On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <justin@classsoftware.com
> >
> > wrote:
> > > > If the Apache License works for those artifacts I think that's
> fine...
> > >
> > > It probably doesn’t, but it's complex and INAL, but I have touched on
> > this about this in IoT talks at previous ApacheCons...
> >
> > FWIW the prior discussions that I mentioned are linked below - from
> > board@ so accessible for ASF Members of Officers only, but we can
> > distill them as needed if a concrete need appears with TVM.
> >
> > We didn't go past the discussions stage at that time (2011) but if
> > there's another case of hardware at the ASF I'm willing to help
> > restart those discussions to move this forward. Either to define which
> > additions to the Apache License are required, or to clarify that it's
> > ok as is.
> >
> > So unless there are specific objections about accepting a project
> > which includes hardware as a software artifact I'm in favor of
> > accepting TVM and sorting out these things during incubation.
> >
> > -Bertrand
> >
> > Prior board@ discussions at https://s.apache.org/hw2011_1 and
> > https://s.apache.org/hw2011_2
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> > For additional commands, e-mail: general-help@incubator.apache.org
> >
> >
>

Re: [Proposal] Apache TVM

Posted by Tianqi Chen <tq...@apache.org>.
Thanks, everyone, for the helpful feedback. I would like to clarify a few
points raised so far on behalf of the current TVM PMC.

> PMC vs PMC member

Thanks for pointing this out. This is something we overlooked; we will update
the proposal accordingly.

> Champion

Markus has been actively engaging with the TVM community and helped the
community start the incubation process. These efforts include:
- Introducing the Apache way at the TVM conference last December
   -
https://sampl.cs.washington.edu/tvmconf/slides/Markus-Weimer-TVM-Apache.pdf
- Helping the community start the incubation conversation (thanks also to
Sebastian and Gon)
   - https://github.com/dmlc/tvm/issues/2401
- Watching the pre-incubation private list and giving helpful feedback

While we do not expect our mentor to actively watch the community on a
daily basis (many of our committers only contribute a few days a week),
he has been very responsive, helped us shape the incubation proposal,
and, most importantly, has been a strong advocate of the Apache way. I
personally think he is more than qualified to be our champion :)

> Hardware artifact

IANAL; however, Apache only releases source code, and our hardware designs
are themselves expressed as software source code (HLS C, and we are moving to
Chisel (Scala)). Anyone can then take that source code and generate an
unofficial hardware release.

Tianqi


On Mon, Feb 18, 2019 at 6:44 AM Bertrand Delacretaz <
bdelacretaz@codeconsult.ch> wrote:

> Hi,
>
> On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <ju...@classsoftware.com>
> wrote:
> > > If the Apache License works for those artifacts I think that's fine...
> >
> > It probably doesn’t, but it's complex and INAL, but I have touched on
> this about this in IoT talks at previous ApacheCons...
>
> FWIW the prior discussions that I mentioned are linked below - from
> board@ so accessible for ASF Members of Officers only, but we can
> distill them as needed if a concrete need appears with TVM.
>
> We didn't go past the discussions stage at that time (2011) but if
> there's another case of hardware at the ASF I'm willing to help
> restart those discussions to move this forward. Either to define which
> additions to the Apache License are required, or to clarify that it's
> ok as is.
>
> So unless there are specific objections about accepting a project
> which includes hardware as a software artifact I'm in favor of
> accepting TVM and sorting out these things during incubation.
>
> -Bertrand
>
> Prior board@ discussions at https://s.apache.org/hw2011_1 and
> https://s.apache.org/hw2011_2
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> For additional commands, e-mail: general-help@incubator.apache.org
>
>

Re: [Proposal] Apache TVM

Posted by Bertrand Delacretaz <bd...@codeconsult.ch>.
Hi,

On Mon, Feb 18, 2019 at 11:44 AM Justin Mclean <ju...@classsoftware.com> wrote:
> > If the Apache License works for those artifacts I think that's fine...
>
> It probably doesn’t, but it's complex and INAL, but I have touched on this about this in IoT talks at previous ApacheCons...

FWIW, the prior discussions that I mentioned are linked below. They are from
board@, so accessible to ASF Members or Officers only, but we can
distill them as needed if a concrete need appears with TVM.

We didn't go past the discussion stage at that time (2011), but if
there's another case of hardware at the ASF I'm willing to help
restart those discussions and move this forward: either to define which
additions to the Apache License are required, or to clarify that it's
OK as is.

So unless there are specific objections to accepting a project
that includes hardware as a software artifact, I'm in favor of
accepting TVM and sorting these things out during incubation.

-Bertrand

Prior board@ discussions at https://s.apache.org/hw2011_1 and
https://s.apache.org/hw2011_2

---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org


Re: [Proposal] Apache TVM

Posted by Justin Mclean <ju...@classsoftware.com>.
Hi,

> If the Apache License works for those artifacts I think that's fine, and personally I like it a lot if we're expanding into new fields.

It probably doesn't, but it's complex and IANAL. I have touched on this in IoT talks at previous ApacheCons. The TL;DR is that some aspects of hardware design are not covered by copyright, which is the basis for most open source licenses, so ALv2 may not apply. Other organisations have put work into this, so we can look to them for guidance.

Thanks,
Justin
---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org


Re: [Proposal] Apache TVM

Posted by Bertrand Delacretaz <bd...@codeconsult.ch>.
Hi,

On Fri, Feb 15, 2019 at 7:42 PM Markus Weimer <we...@apache.org> wrote:
> ...(3) The project contains hardware as a software artifact. We are not
> aware of another ASF project like that and wonder if and how it
> affects its acceptance into the incubator...

If the Apache License works for those artifacts, I think that's fine,
and personally I like it a lot that we're expanding into new fields.

If changes are needed in our processes or anything else, I'm happy to
help. I suppose I'll be following this podling anyway; it sounds
exciting.

-Bertrand

---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org