Posted to commits@mxnet.apache.org by ju...@apache.org on 2019/10/08 15:51:23 UTC

[incubator-mxnet] branch ir-patch updated (38e8828 -> d3d2b10)

This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


    omit 38e8828  [IR-Patch] IR Bridge (#16290)
     add 3244a7a  Julia: add API docs back (#16363)
     add b6f3235  Fix nightly scala pipeline (#16362)
     add 09ae7df  remove redundant branch name (#16372)
     add 626fc32  Disable Pylint false error in numpy_op_signature  (#16370)
     add 916fbf2  boolean_mask_assign operator for future boolean indexing (#16361)
     add 8096421  Embedding gradient performance optimization on GPU (#16355)
     add 2c81a71  Change mailing list url in footer to point to instructions about how to subscribe instead (#16384)
     add 2127f75  Add instructions to report a security vulnerability (#16383)
     add 09285c8  Implements ldexp. (#15845)
     add 2df3282  Numpy Operators: Inner, Outer, vdot (#15846)
     add 295fc14  Numpy det and slogdet operators (#15861)
     add 4940ec0  Fix random op signature
     add df4125a  update NEWS.md and README.md (#16385)
     new d3d2b10  [IR-Patch] IR Bridge (#16290)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (38e8828)
            \
             N -- N -- N   refs/heads/ir-patch (d3d2b10)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
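
As a quick illustration (not part of the automated notification): assuming
a local clone that still has the old ref fetched, the omitted and newly
pushed revisions can be inspected with ordinary git commands, for example:

    # show the rewritten tip of ir-patch and the commit it replaced
    git log --oneline -1 d3d2b10
    git log --oneline -1 38e8828

    # list any references that still reach the omitted commit
    git branch -a --contains 38e8828
    git tag --contains 38e8828

    # confirm the common base B shared by the old and new history
    git merge-base 38e8828 d3d2b10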


Summary of changes:
 NEWS.md                                        |  24 ++
 README.md                                      |   1 +
 ci/Jenkinsfile_utils.groovy                    |   4 +-
 ci/docker/Dockerfile.publish.ubuntu1604_cpu    |   2 +
 ci/docker/Dockerfile.publish.ubuntu1604_gpu    |   2 +
 docs/static_site/src/_includes/footer.html     |   3 +-
 docs/static_site/src/pages/api/faq/security.md |  17 +
 julia/docs/src/api/ndarray.md                  |  18 +-
 julia/docs/src/api/symbolic-node.md            |  11 +-
 python/mxnet/_numpy_op_doc.py                  | 122 ++++++
 python/mxnet/initializer.py                    |   8 +-
 python/mxnet/ndarray/numpy/_op.py              | 203 ++++++++-
 python/mxnet/ndarray/numpy/random.py           |  18 +-
 python/mxnet/numpy/multiarray.py               | 187 ++++++++-
 python/mxnet/numpy/random.py                   |   8 +-
 python/mxnet/numpy_op_signature.py             |   1 -
 python/mxnet/symbol/numpy/_symbol.py           | 182 +++++++-
 python/mxnet/symbol/numpy/random.py            |  13 +-
 src/operator/mshadow_op.h                      |  11 +
 src/operator/numpy/np_boolean_mask_assign.cc   | 270 ++++++++++++
 src/operator/numpy/np_boolean_mask_assign.cu   | 229 ++++++++++
 src/operator/numpy/np_broadcast_reduce_op.h    |   9 +
 src/operator/numpy/np_elemwise_broadcast_op.cc |  37 ++
 src/operator/numpy/np_elemwise_broadcast_op.cu |  19 +
 src/operator/operator_tune.cc                  |   5 +
 src/operator/random/sample_op.cc               |   2 -
 src/operator/tensor/indexing_op.cu             | 233 +++++++++++
 src/operator/tensor/la_op.cc                   |   2 +
 src/operator/tensor/la_op.cu                   |   2 +
 src/operator/tensor/la_op.h                    |   7 +-
 tests/python/unittest/test_exc_handling.py     |  15 +-
 tests/python/unittest/test_numpy_gluon.py      |  15 +-
 tests/python/unittest/test_numpy_op.py         | 557 +++++++++++++++++++++----
 33 files changed, 2095 insertions(+), 142 deletions(-)
 create mode 100644 src/operator/numpy/np_boolean_mask_assign.cc
 create mode 100644 src/operator/numpy/np_boolean_mask_assign.cu


[incubator-mxnet] 01/01: [IR-Patch] IR Bridge (#16290)

Posted by ju...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch ir-patch
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit d3d2b1064ff65c2c701c7e55dae61de4e1f76218
Author: Junru Shao <ju...@gmail.com>
AuthorDate: Mon Sep 30 12:11:35 2019 -0700

    [IR-Patch] IR Bridge (#16290)
    
    * ir converter
    
    Add license
    
    Missed something
    
    lint
    
    lintlintlint
    
    * Restore cryptic part of CachedOp
    
    * Update Makefile
    
    * try again for libtvm.so...
    
    * try again
    
    * try once once again
    
    * let's try to fix julia's issue first
    
    * Remove AsText which is not an exposed symbol
    
    * try to bypass amalgamation
    
    * try again
    
    * boy try this
    
    * blacklist tvm to amalgamation.py
---
 3rdparty/tvm                                       |   2 +-
 CMakeLists.txt                                     |   2 +-
 Makefile                                           |  17 +-
 amalgamation/Makefile                              |   4 +-
 amalgamation/amalgamation.py                       |   4 +-
 ci/jenkins/Jenkins_steps.groovy                    |  20 +--
 .../assembly/src/main/assembly/assembly.xml        |   2 +-
 .../apache/mxnet/util/NativeLibraryLoader.scala    |   2 +-
 src/imperative/cached_op.cc                        |  16 +-
 src/v3/src/nnvm_relay_bridge.cc                    | 182 +++++++++++++++++++++
 tests/nightly/JenkinsfileForBinaries               |   4 +-
 .../JenkinsfileForMBCC                             |   2 +-
 12 files changed, 228 insertions(+), 29 deletions(-)

diff --git a/3rdparty/tvm b/3rdparty/tvm
index afd4b3e..18188f4 160000
--- a/3rdparty/tvm
+++ b/3rdparty/tvm
@@ -1 +1 @@
-Subproject commit afd4b3e4450984358e9d79a7e8e578483cb7b017
+Subproject commit 18188f4ba3f53cc1dab765b8a0d932d21db0ae8a
diff --git a/CMakeLists.txt b/CMakeLists.txt
index f441e9b..051dc91 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -744,7 +744,7 @@ endif()
 
 if(USE_TVM_OP)
   add_definitions(-DMXNET_USE_TVM_OP=1)
-  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm_runtime.so)
+  list(APPEND mxnet_LINKER_LIBS ${CMAKE_CURRENT_BINARY_DIR}/3rdparty/tvm/libtvm.so)
   include(cmake/BuildTVM.cmake)
   add_subdirectory("3rdparty/tvm")
 
diff --git a/Makefile b/Makefile
index b3b188a..bd580ef 100644
--- a/Makefile
+++ b/Makefile
@@ -468,9 +468,9 @@ CUSRC = $(wildcard src/*/*/*/*.cu src/*/*/*.cu src/*/*.cu src/*.cu)
 CUOBJ = $(patsubst %.cu, build/%_gpu.o, $(CUSRC))
 
 ifeq ($(USE_TVM_OP), 1)
-LIB_DEP += lib/libtvm_runtime.so lib/libtvmop.so
+LIB_DEP += lib/libtvm.so lib/libtvmop.so
 CFLAGS += -I$(TVM_PATH)/include -DMXNET_USE_TVM_OP=1
-LDFLAGS += -L$(ROOTDIR)/lib -ltvm_runtime -Wl,-rpath,'$${ORIGIN}'
+LDFLAGS += -L$(ROOTDIR)/lib -ltvm -Wl,-rpath,'$${ORIGIN}'
 
 TVM_USE_CUDA := OFF
 ifeq ($(USE_CUDA), 1)
@@ -618,19 +618,20 @@ $(DMLC_CORE)/libdmlc.a: DMLCCORE
 DMLCCORE:
 	+ cd $(DMLC_CORE); $(MAKE) libdmlc.a USE_SSE=$(USE_SSE) config=$(ROOTDIR)/$(config); cd $(ROOTDIR)
 
-lib/libtvm_runtime.so:
+lib/libtvm.so:
 	echo "Compile TVM"
 	[ -e $(LLVM_PATH)/bin/llvm-config ] || sh $(ROOTDIR)/contrib/tvmop/prepare_tvm.sh; \
 	cd $(TVM_PATH)/build; \
-	cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config" \
+	cmake -DUSE_LLVM="$(LLVM_PATH)/bin/llvm-config --ignore-libllvm" -DHIDE_PRIVATE_SYMBOLS=ON \
+			-DCMAKE_SHARED_LINKER_FLAGS="-Wl,--exclude-libs,ALL" \
 		  -DUSE_SORT=OFF -DUSE_CUDA=$(TVM_USE_CUDA) -DUSE_CUDNN=OFF ..; \
 	$(MAKE) VERBOSE=1; \
 	mkdir -p $(ROOTDIR)/lib; \
-	cp $(TVM_PATH)/build/libtvm_runtime.so $(ROOTDIR)/lib/libtvm_runtime.so; \
+	cp $(TVM_PATH)/build/libtvm.so $(ROOTDIR)/lib/libtvm.so; \
 	ls $(ROOTDIR)/lib; \
 	cd $(ROOTDIR)
 
-lib/libtvmop.so: lib/libtvm_runtime.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
+lib/libtvmop.so: lib/libtvm.so $(wildcard contrib/tvmop/*/*.py contrib/tvmop/*.py)
 	echo "Compile TVM operators"
 	PYTHONPATH=$(TVM_PATH)/python:$(TVM_PATH)/topi/python:$(ROOTDIR)/contrib \
 		LD_LIBRARY_PATH=$(ROOTDIR)/lib \
@@ -696,8 +697,8 @@ rpkg:
 		cp -rf lib/libmklml_intel.so R-package/inst/libs; \
 	fi
 
-	if [ -e "lib/libtvm_runtime.so" ]; then \
-		cp -rf lib/libtvm_runtime.so R-package/inst/libs; \
+	if [ -e "lib/libtvm.so" ]; then \
+		cp -rf lib/libtvm.so R-package/inst/libs; \
 	fi
 
 	mkdir -p R-package/inst/include
diff --git a/amalgamation/Makefile b/amalgamation/Makefile
index 701c1f1..f45ebfc 100644
--- a/amalgamation/Makefile
+++ b/amalgamation/Makefile
@@ -49,7 +49,7 @@ endif
 .PHONY: all clean
 
 DEFS+=-DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DDMLC_LOG_STACK_TRACE=0
-DEFS+=-DMSHADOW_FORCE_STREAM -DMXNET_USE_OPENCV=0 -DMXNET_PREDICT_ONLY=1
+DEFS+=-DMSHADOW_FORCE_STREAM -DMXNET_USE_OPENCV=0 -DMXNET_PREDICT_ONLY=1 -DMXNET_AMALGAMATION=1
 CFLAGS=-std=c++11 -Wno-unknown-pragmas -Wall $(DEFS)
 
 # if architecture of the CPU supports F16C instruction set, enable USE_F16C for fast fp16 computation on CPU
@@ -120,7 +120,7 @@ else
 endif
 
 libmxnet_predict.js: mxnet_predict-all.cc
-	${EMCC} -std=c++11 -O2 $(DEFS) -DMSHADOW_USE_SSE=0 -D__MXNET_JS__  -o $@ $+ \
+	${EMCC} -std=c++11 -O2 $(DEFS) -DMSHADOW_USE_SSE=0 -D__MXNET_JS__ -o $@ $+ \
 	-s EXPORTED_FUNCTIONS="['_MXPredCreate', \
 	                        '_MXPredGetOutputShape', \
 	                        '_MXPredSetInput', \
diff --git a/amalgamation/amalgamation.py b/amalgamation/amalgamation.py
index 5f825de..8d1cd6f 100644
--- a/amalgamation/amalgamation.py
+++ b/amalgamation/amalgamation.py
@@ -170,6 +170,7 @@ def expand(x, pending, stage):
             if not source:
                 if (h not in blacklist and
                     h not in sysheaders and
+                    'tvm' not in h and
                     'mkl' not in h and
                     'nnpack' not in h and
                     'tensorrt' not in h and
@@ -190,7 +191,8 @@ expand.fileCount = 0
 
 # Expand the stages
 expand(sys.argv[2], [], "3rdparty/dmlc-core")
-expand(sys.argv[3], [], "3rdparty/tvm/nnvm")
+expand(sys.argv[3], [], "3rdparty/tvm")
+expand(sys.argv[3], [], "3rdparty/nnvm")
 expand(sys.argv[4], [], "src")
 
 # Write to amalgamation file
diff --git a/ci/jenkins/Jenkins_steps.groovy b/ci/jenkins/Jenkins_steps.groovy
index 30db322..48cabeb 100644
--- a/ci/jenkins/Jenkins_steps.groovy
+++ b/ci/jenkins/Jenkins_steps.groovy
@@ -23,22 +23,22 @@
 utils = load('ci/Jenkinsfile_utils.groovy')
 
 // mxnet libraries
-mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
-mx_lib_cython = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
+mx_lib_cython = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 
 // Python wheels
 mx_pip = 'build/*.whl'
 
 // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static library by default.
-mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so'
-mx_cmake_lib_cython = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so'
+mx_cmake_lib_cython = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static library by default.
-mx_cmake_lib_debug = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/libsample_lib.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests'
-mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, build/3rdparty/mkldnn/src/libmkldnn.so.0'
-mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, lib/libiomp5.so, lib/libmkldnn.so.0, lib/libmklml_intel.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
-mx_tensorrt_lib = 'build/libmxnet.so, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, lib/libnvonnxparser_runtime.so.0, lib/libnvonnxparser.so.0, lib/libonnx_proto.so, lib/libonnx.so'
-mx_lib_cpp_examples = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
-mx_lib_cpp_examples_cpu = 'build/libmxnet.so, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/cpp-package/example/*'
+mx_cmake_lib_debug = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm.so, build/libtvmop.so, build/libsample_lib.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests'
+mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, build/3rdparty/mkldnn/src/libmkldnn.so.0'
+mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm.so, lib/libtvmop.so, libsample_lib.so, lib/libiomp5.so, lib/libmkldnn.so.0, lib/libmklml_intel.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
+mx_tensorrt_lib = 'build/libmxnet.so, build/3rdparty/tvm/libtvm.so, build/libtvmop.so, lib/libnvonnxparser_runtime.so.0, lib/libnvonnxparser.so.0, lib/libonnx_proto.so, lib/libonnx.so'
+mx_lib_cpp_examples = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib_cpp_examples_cpu = 'build/libmxnet.so, build/3rdparty/tvm/libtvm.so, build/libtvmop.so, build/cpp-package/example/*'
 
 // Python unittest for CPU
 // Python 2
diff --git a/scala-package/assembly/src/main/assembly/assembly.xml b/scala-package/assembly/src/main/assembly/assembly.xml
index bcc5408..0588244 100644
--- a/scala-package/assembly/src/main/assembly/assembly.xml
+++ b/scala-package/assembly/src/main/assembly/assembly.xml
@@ -54,7 +54,7 @@
       <directory>${MXNET_DIR}/lib</directory>
       <includes>
         <include>libmxnet.so</include>
-        <include>libtvm_runtime.so</include>
+        <include>libtvm.so</include>
         <include>libgfortran.so.3</include>
         <include>libquadmath.so.0</include>
         <include>libiomp5.so</include>
diff --git a/scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala b/scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala
index 9609ba2..5d95745 100644
--- a/scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala
+++ b/scala-package/core/src/main/scala/org/apache/mxnet/util/NativeLibraryLoader.scala
@@ -86,7 +86,7 @@ private[mxnet] object NativeLibraryLoader {
     logger.debug(s"Attempting to load $loadLibname")
     val libFileInJar = libPathInJar + loadLibname
     saveLibraryToTemp("libmxnet.so", "/lib/native/libmxnet.so", true)
-    saveLibraryToTemp("libtvm_runtime.so", "/lib/native/libtvm_runtime.so", false)
+    saveLibraryToTemp("libtvm.so", "/lib/native/libtvm.so", false)
     saveLibraryToTemp("libgfortran.so.3", "/lib/native/libgfortran.so.3", false)
     saveLibraryToTemp("libquadmath.so.0", "/lib/native/libquadmath.so.0", false)
     saveLibraryToTemp("libiomp5.so", "/lib/native/libiomp5.so", false)
diff --git a/src/imperative/cached_op.cc b/src/imperative/cached_op.cc
index 6818d75..14e9527 100644
--- a/src/imperative/cached_op.cc
+++ b/src/imperative/cached_op.cc
@@ -25,6 +25,18 @@
 #include "../operator/operator_common.h"
 #include "../operator/subgraph/common.h"
 
+#if MXNET_USE_TVM_OP
+#ifndef MXNET_AMALGAMATION
+#include <tvm/node/node.h>
+namespace mxnet {
+namespace v3 {
+namespace nnvm_relay_bridge {
+tvm::NodeRef NNVMToRelay(const nnvm::Graph &g);
+}  // namespace nnvm_relay_bridge
+}  // namespace v3
+}  // namespace mxnet
+#endif  // MXNET_AMALGAMATION
+#endif  // MXNET_USE_TVM_OP
 
 namespace mxnet {
 
@@ -312,7 +324,9 @@ bool CachedOp::SetForwardGraph(
   using namespace imperative;
   CHECK_EQ(inputs.size(), num_inputs());
   nnvm::Graph& g = info->fwd_graph;
-
+#if MXNET_USE_TVM_OP && !defined MXNET_AMALGAMATION
+  v3::nnvm_relay_bridge::NNVMToRelay(g);
+#endif  // MXNET_USE_TVM_OP && !define MXNET_AMALGAMATION
   ShapeVector shape_inputs;
   DTypeVector dtype_inputs;
   StorageTypeVector storage_type_inputs;
diff --git a/src/v3/src/nnvm_relay_bridge.cc b/src/v3/src/nnvm_relay_bridge.cc
new file mode 100644
index 0000000..298ce65
--- /dev/null
+++ b/src/v3/src/nnvm_relay_bridge.cc
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * Copyright (c) 2019 by Contributors
+ * \file nnvm_relay_bridge.cc
+ * \author Junru Shao
+ */
+#if MXNET_USE_TVM_OP
+#ifndef MXNET_AMALGAMATION
+#include <nnvm/graph.h>
+#include <tvm/relay/expr.h>
+#include <tvm/relay/op.h>
+#include <tvm/node/container.h>
+#include <tvm/node/node.h>
+
+namespace mxnet {
+namespace v3 {
+namespace nnvm_relay_bridge {
+
+using tvm::relay::Expr;
+using tvm::relay::TupleGetItemNode;
+using tvm::relay::FunctionNode;
+using tvm::relay::Var;
+using tvm::relay::VarNode;
+using tvm::relay::CallNode;
+using tvm::relay::TupleNode;
+using tvm::relay::LetNode;
+using tvm::NodeRef;
+using tvm::Array;
+
+static void PrintIndexedGraph(const nnvm::Graph &g) {
+  const auto &idx = g.indexed_graph();
+  std::unordered_set<int> input_nodes(idx.input_nodes().begin(),
+                                      idx.input_nodes().end());
+  std::cout << idx.num_nodes() << " nodes, " << input_nodes.size()
+            << " input nodes" << std::endl;
+  int n_nodes = idx.num_nodes();
+  for (int i = 0, input_cnt = 0; i < n_nodes; ++i) {
+    const nnvm::Node *node = idx[i].source;
+    const nnvm::Op *op = node->op();
+    std::string op_name = op ? op->name : "None";
+    if (input_nodes.count(i)) {
+      input_cnt += 1;
+      op_name = (op ? op->name + " [input " : "[input ") + std::to_string(input_cnt) + "]";
+    } else {
+      op_name = op ? op->name : "None";
+    }
+    std::cout << "  i = " << i << ", op = " << op_name
+              << ", #(input node entries) = " << idx[i].inputs.size()
+              << std::endl;
+    int j_cnt = 0;
+    for (const nnvm::IndexedGraph::NodeEntry &j : idx[i].inputs) {
+      std::cout << "    input entry #" << ++j_cnt
+                << ", entry_id = " << idx.entry_id(j)
+                << ", (node_id = " << j.node_id << ", index = " << j.index
+                << ", version = " << j.version << ")"
+                << std::endl;
+    }
+    for (int j_cnt = 0, n_out = node->num_outputs(); j_cnt < n_out; ++j_cnt) {
+      uint32_t entry_id = idx.entry_id(i, j_cnt);
+      std::cout << "    output entry #" << j_cnt + 1
+                << ", entry_id = " << entry_id
+                << std::endl;
+    }
+  }
+  std::cout << idx.outputs().size() << " output node entries: "
+            << std::endl;
+  int j_cnt = 0;
+  for (const nnvm::IndexedGraph::NodeEntry &j : idx.outputs()) {
+    std::cout << "  output entry #" << ++j_cnt
+              << ", entry_id = " << idx.entry_id(j)
+              << ", (node_id = " << j.node_id << ", index = " << j.index
+              << ", version = " << j.version << ")"
+              << std::endl;
+  }
+}
+
+NodeRef NNVMToRelay(const nnvm::Graph &g) {
+  PrintIndexedGraph(g);
+  const auto &idx = g.indexed_graph();
+  int n_nodes = idx.num_nodes();
+  // maps: node -> var
+  std::vector<Var> node2var(n_nodes);
+  // maps: (node, output_index) -> var
+  std::vector<std::vector<Var> > entries(n_nodes);
+  // maps: node -> #outputs of the node
+  std::vector<int> n_outputs(n_nodes);
+  for (int node_id = 0, input_cnt = 0, compute_cnt = 0; node_id < n_nodes; ++node_id) {
+    const nnvm::Node *node = idx[node_id].source;
+    int n_out = node->num_outputs();
+    n_outputs[node_id] = n_out;
+    std::string name = node->is_variable() ?
+      "arg_" + std::to_string(++input_cnt) :
+      "x_" + std::to_string(++compute_cnt);
+    Var var = node2var[node_id] = VarNode::make(name, {});
+    std::vector<Var> &outputs = entries[node_id];
+    if (n_out == 1) {
+      outputs.push_back(var);
+    } else {
+      outputs.reserve(n_out);
+      for (int i = 0; i < n_out; ++i) {
+        outputs.push_back(VarNode::make(name + "#" + std::to_string(i), {}));
+      }
+    }
+  }
+  // Create the let list
+  std::vector<std::pair<Var, Expr> > let_list;
+  for (int node_id = 0; node_id < n_nodes; ++node_id) {
+    const Var &var = node2var[node_id];
+    const nnvm::IndexedGraph::Node &node = idx[node_id];
+    int n_out = n_outputs[node_id];
+    if (node.source->is_variable()) {
+      CHECK_EQ(n_out, 1) << "InternalError: internal assumption violation";
+      continue;
+    }
+    // Create call_args
+    std::vector<Expr> call_args;
+    for (const nnvm::IndexedGraph::NodeEntry &input : node.inputs) {
+      CHECK_LT((int)input.node_id, node_id) << "InternalError: IndexedGraph is not topo-sorted";
+      call_args.push_back(entries[input.node_id][input.index]);
+    }
+    // TODO(@junrushao1994): map attrs
+    // Add a CallNode
+    let_list.push_back({var, CallNode::make(tvm::relay::Op::Get("add"), call_args)});
+    // Add logic for de-tuple
+    if (n_out > 1) {
+      for (int index = 0; index < n_out; ++index) {
+        let_list.push_back(std::make_pair(
+          entries[node_id][index],
+          TupleGetItemNode::make(var, index)));
+      }
+    }
+  }
+  // Find input arguments to the function
+  Array<Var> params;
+  for (int node_id = 0; node_id < n_nodes; ++node_id) {
+    const nnvm::Node *node = idx[node_id].source;
+    if (node->is_variable()) {
+      params.push_back(node2var[node_id]);
+    }
+  }
+  // Find outputs of the function
+  Expr body;
+  {
+    // 1) Find outputs
+    Array<Expr> outputs;
+    for (const nnvm::IndexedGraph::NodeEntry &j : idx.outputs()) {
+      outputs.push_back(entries[j.node_id][j.index]);
+    }
+    body = TupleNode::make(std::move(outputs));
+    // 2) Construct let out of let-list
+    for ( ; !let_list.empty(); let_list.pop_back()) {
+      const std::pair<Var, Expr> &last = let_list.back();
+      body = LetNode::make(last.first, last.second, body);
+    }
+  }
+  // Then we are able to construct the function
+  return FunctionNode::make(std::move(params), std::move(body), {}, {}, {});
+}
+
+}  // namespace nnvm_relay_bridge
+}  // namespace v3
+}  // namespace mxnet
+#endif  // MXNET_AMALGAMATION
+#endif  // MXNET_USE_TVM_OP
diff --git a/tests/nightly/JenkinsfileForBinaries b/tests/nightly/JenkinsfileForBinaries
index 5158274..e825492 100755
--- a/tests/nightly/JenkinsfileForBinaries
+++ b/tests/nightly/JenkinsfileForBinaries
@@ -18,8 +18,8 @@
 //
 //This is a Jenkinsfile for nightly tests. The format and some functions have been picked up from the top-level Jenkinsfile
 
-mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
-mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so'
+mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm.so, lib/libtvmop.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
+mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so'
 
 node('utility') {
   // Loading the utilities requires a node context unfortunately
diff --git a/tests/nightly/model_backwards_compatibility_check/JenkinsfileForMBCC b/tests/nightly/model_backwards_compatibility_check/JenkinsfileForMBCC
index 725261d..7d95e3c 100644
--- a/tests/nightly/model_backwards_compatibility_check/JenkinsfileForMBCC
+++ b/tests/nightly/model_backwards_compatibility_check/JenkinsfileForMBCC
@@ -18,7 +18,7 @@
 //
 //This is a Jenkinsfile for the model backwards compatibility checker. The format and some functions have been picked up from the top-level Jenkinsfile.
 
-mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so,lib/libtvmop.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
+mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm.so,lib/libtvmop.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
 
 node('restricted-utility') {
   // Loading the utilities requires a node context unfortunately