Posted to commits@tvm.apache.org by jr...@apache.org on 2020/11/06 02:53:01 UTC

[incubator-tvm] branch cargo-build updated (f10ab21 -> 1604de7)

This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch cargo-build
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 discard f10ab21  Debug segfault from loading Python
 discard b8dcc35  WIP
 discard a9ee3cb  WIP
 discard 49246bf  WIP
 discard 8e295b7  Fix some CR
 discard 1874350  More cleanup
 discard 6828374  Fix the extension code
 discard 04a9779  Format and cleanup
 discard 0cabfdc  Remove type checker
 discard 6e13467  Rust Diagnostics work
 discard 4261461  Fix
 discard eeb86c6  Fix calling
 discard 4cd1bbc  Improve Rust bindings
 discard 20c6a28  Clean up exporting to show off new diagnostics
 discard e0f9801  Fix Linux build
 discard db24553  Update CMake and delete old API
 discard b2b59c2  Borrow code from Egg
 discard 131e40a  Hacking on Rust inside of TVM
 discard cb37856  WIP
 discard 77ba309  Codespan example almost working
 discard 1097cbf  Add initial boilerplate for Rust diagnostic interface.
     add 0ce55cb  [CI] Keras version upgraded from 2.3.1 to 2.4.3 (#6793)
     add 9a32e70  [TVMSCRIPT] Add synr dependency in preparation for tvmscript diagnostics overhaul. (#6795)
     add 7196eb8  [BYOC] Allow custom codegens to register their own constant updater (#6697)
     add a261454  [AutoScheduler] Relay integration : Task extraction (#6710)
     add 2625866  Fix mutate auto unroll (#6807)
     add 50fc938  [CI] Pin h5py version to < 3.0 to workaround issues with TF/Keras (#6808)
     add ceef616  Extract channels from weight shape for conv2d. (#6805)
     add f956c38  [µTVM] Add serial transport, parameterize µTVM Zephyr test, run on physical HW (#6789)
     add 616bad2  [CI] Add m6g instance (ARM64) to mainline CI (#6804)
     add 0dc7de5  [CI] Move back Keras to 2.4.3 (#6810)
     add b07ddea  [CI] Update to latest (#6812)
     add 9d506ad  [OBJECT] Update types slots for baseexpr and primexpr (#6814)
     add 883954e  [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface. (#6656)
     add 73f425d  TF frontend: add softsign op (#6799)
     add 9f9d475  [TENSORFLOW]Sparse2Dense support (#5767)
     add 9c2d68d  [AutoScheduler] New layout rewrite option: Weight pre-transpose (#6750)
     add 3222cad  Update stale link to new location (#6819)
     add 6dc8e22  [rust][tvm-graph-rt]: maintain error sources when propagating errors, swap Mutex for RwLock (#6815)
     add 01b98c1  Improve AArch64 depthwise convolution through smlal/smlal2 intrinsic (#6711)
     add 174e21a  [CI] Torch 1.7 update to mainline (#6828)
     add 26b2e16  [TF] Fix a bug in _stridedSlice() (#6829)
     add 896cb10  [CI] remove unused environment var (#6824)
     add 8877ed5  [TVMC] 'tvmc tune' --rpc-tracker and --rpc-tracker fail due to argparse misconfiguration (#6822)
     add b4db112  Fix Annotate Target to support freevars(relay.zeros, relay.ones etc) of any size (including zero)  (#6826)
     add 4122def  [DOCS] Enable theme with header and footer. (#6834)
     add 9b97b56  Update link (#6838)
     add 6e36fc4  [BYOC] FTVMAnnotateTarget method signature update (#6786)
     add db28544  [CI] Disable flaky tests (#6841)
     add 0c02780  [Relay][Frontend] SparseTensorDenseMatMul support for Tensorflow (#6685)
     add 237744d  Register shape functions for some image related ops (#6373)
     add 8370396  [TopHub] Bump the versions (#6837)
     add 6019db2  [Graph memory plan] Support nested tuples (#6809)
     add 5471cd2  [CI] Add python setup script (#6844)
     add 47d9415  Syntax error String::fromwe() should be String::from() (#6846)
     add b8761ed  [AutoScheduler] Bug fix for layout rewrite CI error in i386 (#6830)
     add 3ff0100  [CI] Add more guidelines about local setup (#6848)
     add 8013a23  [FIX] Add task_ci_python_setup.sh to the arm CI (#6850)
     add c475dff  Update SimplifyInference documentation (#6853)
     add 7291a92  [µTVM] Add virtual machine, test zephyr runtime on real hardware (#6703)
     add a4bd5f8  [Rust][IRModule] Flesh out IRModule methods (#6741)
     add 7ee91da  [TOPI] Enable scatter_add on GPU  (#6856)
     add 9ea4bf5  [Relay][Frontend][Onnx] If Operator Support (#6730)
     add b31f4ae  [QNN] Dynamic scale, zero point in qnn.op.dequantize (#6849)
     add d164aac  [TVMSCRIPT] Using diagnostics for TVM Script (#6797)
     add 0469a77  [BYOC] [ACL] ACL Runtime padding workaround (#6724)
     new 35d49bf  Debug segfault from loading Python
     new 1604de7  SciPy causes crashes

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (f10ab21)
            \
             N -- N -- N   refs/heads/cargo-build (1604de7)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
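The force-push situation git describes above can be reproduced end to end. A minimal sketch in Python via `subprocess` (the throwaway repository layout and the branch name `demo` are illustrative, not taken from this email): a branch is pushed with commits B and O, history is rewritten to replace O with N, and the rewrite only lands on the remote with `--force`.

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    # Run a git command and return its stdout (quiet helper for this sketch).
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

tmp = tempfile.mkdtemp()
work = os.path.join(tmp, "work")
git("init", "-q", "--bare", "origin.git", cwd=tmp)          # stand-in remote
git("clone", "-q", os.path.join(tmp, "origin.git"), "work", cwd=tmp)
git("config", "user.email", "dev@example.com", cwd=work)
git("config", "user.name", "Dev", cwd=work)

for msg in ("B", "O"):                                      # * -- B -- O
    git("commit", "-q", "--allow-empty", "-m", msg, cwd=work)
git("push", "-q", "origin", "HEAD:refs/heads/demo", cwd=work)

git("reset", "-q", "--hard", "HEAD~1", cwd=work)            # drop O locally...
git("commit", "-q", "--allow-empty", "-m", "N", cwd=work)   # ...replace with N
# A plain push would be rejected as non-fast-forward; --force rewrites the
# remote branch, making O unreachable from it ("discard" in the email above).
git("push", "-q", "--force", "origin", "HEAD:refs/heads/demo", cwd=work)
print(git("log", "--format=%s", "origin/demo", cwd=work).split())
```

After the forced push, the remote-tracking branch shows only `N` and `B`; `O` is exactly the kind of revision the notification lists as discarded.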


Summary of changes:
 CMakeLists.txt                                     |   2 +-
 Jenkinsfile                                        |  42 +-
 apps/{howto_deploy => microtvm}/README.md          |  17 +-
 apps/microtvm/reference-vm/.gitignore              |   1 +
 apps/microtvm/reference-vm/README.md               |  67 ++
 apps/microtvm/reference-vm/base-box-tool.py        | 407 +++++++++
 apps/microtvm/reference-vm/zephyr/.gitignore       |   1 +
 apps/microtvm/reference-vm/zephyr/Vagrantfile      |  56 ++
 .../reference-vm/zephyr/base-box/.gitignore        |   4 +
 .../zephyr/base-box/Vagrantfile.packer-template}   |  37 +-
 .../microtvm/reference-vm/zephyr/base-box/setup.sh | 102 +++
 apps/microtvm/reference-vm/zephyr/pyproject.toml   | 140 +++
 .../microtvm/reference-vm/zephyr/rebuild-tvm.sh    |  18 +-
 apps/microtvm/reference-vm/zephyr/setup.sh         |  41 +
 cmake/config.cmake                                 |   4 -
 cmake/modules/LLVM.cmake                           |   2 +
 cmake/modules/RustExt.cmake                        |  21 +-
 docker/{Dockerfile.ci_i386 => Dockerfile.ci_arm}   |  16 +-
 docker/README.md                                   |   2 +-
 docker/bash.sh                                     |   7 +-
 docker/build.sh                                    |   1 +
 docker/install/ubuntu_install_onnx.sh              |   2 +-
 docker/install/ubuntu_install_python_package.sh    |   2 +-
 docker/install/ubuntu_install_tensorflow.sh        |   5 +-
 docker/with_the_same_user                          |   1 +
 docs/README.txt                                    |   2 +-
 docs/conf.py                                       |  58 +-
 docs/vta/dev/hardware.rst                          |  12 +-
 docs/vta/dev/index.rst                             |   2 +-
 docs/vta/install.rst                               |   2 +-
 include/tvm/auto_scheduler/compute_dag.h           |  32 +-
 include/tvm/auto_scheduler/transform_step.h        |  46 +-
 include/tvm/ir/expr.h                              |   4 +-
 include/tvm/relay/op_attr_types.h                  |   8 +-
 include/tvm/relay/transform.h                      |   5 +-
 python/setup.py                                    |   2 +-
 python/tvm/auto_scheduler/__init__.py              |   6 +
 python/tvm/auto_scheduler/compute_dag.py           |  16 +-
 python/tvm/auto_scheduler/cost_model/xgb_model.py  |  22 +-
 python/tvm/auto_scheduler/dispatcher.py            | 275 ++++++
 .../generic/sort.py => auto_scheduler/env.py}      |  48 +-
 python/tvm/auto_scheduler/measure.py               | 124 +--
 python/tvm/auto_scheduler/relay_integration.py     | 232 +++++
 python/tvm/auto_scheduler/search_policy.py         |  11 +-
 python/tvm/auto_scheduler/utils.py                 |   5 +-
 python/tvm/auto_scheduler/workload_registry.py     | 119 +--
 python/tvm/autotvm/tophub.py                       |   6 +-
 python/tvm/driver/tvmc/autotuner.py                |   2 -
 python/tvm/exec/microtvm_debug_shell.py            | 152 ++++
 python/tvm/micro/contrib/zephyr.py                 |  37 +-
 python/tvm/micro/debugger.py                       | 173 +++-
 python/tvm/micro/session.py                        |  36 +-
 python/tvm/micro/transport/base.py                 |  20 +-
 python/tvm/micro/transport/debug.py                |   4 +-
 python/tvm/micro/transport/file_descriptor.py      |   2 +-
 python/tvm/micro/transport/serial.py               | 128 +++
 python/tvm/relay/backend/compile_engine.py         |  27 +-
 python/tvm/relay/frontend/__init__.py              |   3 -
 python/tvm/relay/frontend/onnx.py                  |  44 +-
 python/tvm/relay/frontend/pytorch.py               |  72 +-
 python/tvm/relay/frontend/tensorflow.py            |  80 +-
 python/tvm/relay/op/_transform.py                  |   2 +-
 python/tvm/relay/op/contrib/arm_compute_lib.py     |  89 +-
 python/tvm/relay/op/contrib/coreml.py              |   3 +-
 python/tvm/relay/op/contrib/dnnl.py                |   2 +-
 python/tvm/relay/op/contrib/ethosn.py              |  21 +-
 python/tvm/relay/op/contrib/tensorrt.py            | 102 ++-
 python/tvm/relay/op/image/_image.py                |  76 ++
 python/tvm/relay/op/nn/nn.py                       |   6 +-
 python/tvm/relay/op/op.py                          |  25 +
 python/tvm/relay/op/strategy/cuda.py               |  28 +-
 python/tvm/relay/op/strategy/generic.py            |  17 +-
 python/tvm/relay/qnn/op/qnn.py                     |   7 +
 python/tvm/relay/transform/transform.py            |   4 +
 python/tvm/rpc/server.py                           |  12 +-
 python/tvm/script/context_maintainer.py            |   4 +-
 python/tvm/script/diagnostics.py                   |  54 ++
 python/tvm/script/meta_unparser.py                 |  31 +-
 python/tvm/script/parser.py                        | 979 +++++++++++----------
 python/tvm/script/scope_handler.py                 |  20 +-
 python/tvm/script/special_stmt.py                  |  18 +-
 python/tvm/target/target.py                        |  17 +-
 python/tvm/topi/arm_cpu/depthwise_conv2d.py        |  65 +-
 python/tvm/topi/arm_cpu/tensor_intrin.py           |  90 ++
 python/tvm/topi/cuda/scatter.py                    | 133 ++-
 python/tvm/topi/cuda/sparse.py                     |  13 +-
 .../topi/testing/conv1d_transpose_ncw_python.py    |   3 +-
 python/tvm/topi/testing/conv2d_hwcn_python.py      |   2 +-
 rust/Cargo.toml                                    |   1 -
 rust/compiler-ext/Cargo.toml                       |  20 +-
 rust/compiler-ext/src/lib.rs                       |   5 +-
 rust/tvm-graph-rt/src/errors.rs                    |  14 +-
 rust/tvm-graph-rt/src/graph.rs                     |  49 +-
 rust/tvm-graph-rt/src/module/syslib.rs             |  10 +-
 rust/tvm-graph-rt/tests/test_wasm32/Cargo.toml     |   2 +-
 rust/tvm-macros/Cargo.toml                         |   2 +-
 rust/tvm-macros/src/external.rs                    |  17 +-
 rust/tvm-macros/src/object.rs                      |  13 +-
 rust/tvm-rt/src/array.rs                           |  19 +-
 rust/tvm-rt/src/function.rs                        |   8 -
 rust/tvm-rt/src/map.rs                             |   2 -
 rust/tvm-rt/src/ndarray.rs                         |   2 +-
 rust/tvm-rt/src/object/mod.rs                      |   1 +
 rust/tvm-rt/src/object/object_ptr.rs               |  40 +-
 rust/tvm-rt/src/string.rs                          |   6 +-
 rust/tvm/src/bin/tyck.rs                           |  22 +-
 rust/tvm/src/ir/arith.rs                           |   4 +-
 rust/tvm/src/ir/attrs.rs                           |   2 +-
 rust/tvm/src/ir/diagnostics/codespan.rs            |  32 +-
 rust/tvm/src/ir/diagnostics/mod.rs                 |  89 +-
 rust/tvm/src/ir/expr.rs                            |  16 +-
 rust/tvm/src/ir/function.rs                        |   2 +-
 rust/tvm/src/ir/module.rs                          | 235 +++--
 rust/tvm/src/ir/op.rs                              |   2 +-
 rust/tvm/src/ir/relay/attrs/nn.rs                  |  14 +-
 rust/tvm/src/ir/relay/attrs/transform.rs           |   2 +-
 rust/tvm/src/ir/relay/mod.rs                       |  83 +-
 rust/tvm/src/ir/source_map.rs                      |  20 +-
 rust/tvm/src/ir/span.rs                            |  42 +-
 rust/tvm/src/ir/tir.rs                             |   2 +-
 rust/tvm/src/ir/ty.rs                              | 127 ++-
 rust/tvm/src/lib.rs                                |   2 +-
 rust/tvm/src/transform.rs                          |   4 +-
 rust/tvm/test.rly                                  |   3 -
 src/auto_scheduler/compute_dag.cc                  | 182 ++--
 src/auto_scheduler/loop_state.cc                   |   6 +
 src/auto_scheduler/search_policy/sketch_policy.cc  |  30 +-
 src/auto_scheduler/search_policy/sketch_policy.h   |   7 +-
 .../search_policy/sketch_policy_rules.cc           |  10 +-
 src/auto_scheduler/transform_step.cc               |  52 ++
 src/contrib/rust_extension.cc                      |   4 +-
 src/ir/diagnostic.cc                               |   4 +
 src/ir/module.cc                                   |   3 +
 src/relay/backend/compile_engine.cc                |  24 +-
 src/relay/backend/contrib/ethosn/codegen_ethosn.h  |   3 +
 src/relay/backend/graph_plan_memory.cc             |   5 +-
 src/relay/backend/graph_runtime_codegen.cc         |   9 +-
 src/relay/backend/utils.h                          |  31 +
 src/relay/backend/vm/compiler.cc                   |   4 +-
 src/relay/qnn/op/dequantize.cc                     |  23 +-
 src/relay/transforms/annotate_target.cc            |  17 +-
 src/runtime/contrib/arm_compute_lib/acl_utils.cc   |   1 +
 src/runtime/contrib/tensorrt/tensorrt_ops.cc       |   2 +-
 src/runtime/micro/micro_session.cc                 |  13 +-
 src/runtime/rpc/rpc_endpoint.cc                    |   8 +
 src/target/source/codegen_c_host.cc                |   5 +-
 src/target/source/codegen_c_host.h                 |   2 +-
 tests/lint/check_file_type.py                      |   3 +
 tests/micro/qemu/.gitignore                        |   2 +-
 .../micro/qemu/conftest.py                         |  21 +-
 tests/micro/qemu/test_zephyr.py                    |  46 +-
 .../contrib/test_arm_compute_lib/infrastructure.py |   3 +-
 .../contrib/test_arm_compute_lib/test_dense.py     |  62 +-
 .../contrib/test_arm_compute_lib/test_maximum.py   |   1 +
 .../contrib/test_arm_compute_lib/test_network.py   |   7 +-
 .../contrib/test_arm_compute_lib/test_pooling.py   |  11 +-
 .../contrib/test_arm_compute_lib/test_reshape.py   |   5 +-
 .../test_ethosn/test_constant_duplication.py       |  82 ++
 tests/python/contrib/test_tensorrt.py              |   1 -
 tests/python/frontend/onnx/test_forward.py         |  49 +-
 tests/python/frontend/pytorch/qnn_test.py          |   3 +-
 tests/python/frontend/pytorch/test_forward.py      |  16 +-
 tests/python/frontend/tensorflow/test_forward.py   | 159 ++++
 tests/python/relay/test_any.py                     |  88 ++
 .../relay/test_auto_scheduler_task_extraction.py   |  90 ++
 tests/python/relay/test_auto_scheduler_tuning.py   |  62 ++
 tests/python/relay/test_backend_graph_runtime.py   |  26 +
 tests/python/relay/test_external_codegen.py        |  34 +
 tests/python/relay/test_op_level3.py               |   4 +-
 tests/python/relay/test_op_qnn_dequantize.py       |  28 +
 tests/python/relay/test_op_qnn_quantize.py         |  28 +
 tests/python/relay/test_pass_annotate_target.py    | 115 ++-
 tests/python/relay/test_pass_partition_graph.py    |  12 +-
 .../topi/python/test_topi_depthwise_conv2d.py      |  54 ++
 tests/python/topi/python/test_topi_sparse.py       |   2 +-
 .../unittest/test_auto_scheduler_cost_model.py     |   2 +-
 .../test_auto_scheduler_evolutionary_search.py     |   4 +-
 .../unittest/test_auto_scheduler_layout_rewrite.py |  82 +-
 .../unittest/test_auto_scheduler_task_scheduler.py |  12 +
 tests/python/unittest/test_target_codegen_x86.py   |   7 +
 .../python/unittest/test_tvmscript_error_report.py | 219 +++--
 .../scripts/task_ci_python_setup.sh                |  16 +-
 ...nfig_build_i386.sh => task_config_build_arm.sh} |   3 +-
 tests/scripts/task_config_build_cpu.sh             |   1 -
 tests/scripts/task_config_build_gpu.sh             |   1 -
 tests/scripts/task_config_build_gpu_vulkan.sh      |   1 -
 tests/scripts/task_config_build_i386.sh            |   1 -
 tests/scripts/task_config_build_qemu.sh            |   1 -
 tests/scripts/task_config_build_wasm.sh            |   1 -
 tests/scripts/task_rust.sh                         |   2 +-
 .../frontend/deploy_object_detection_pytorch.py    |   6 +-
 tutorials/frontend/from_keras.py                   |  26 +-
 tutorials/frontend/from_pytorch.py                 |   6 +-
 tutorials/micro/micro_reference_vm.py              | 139 +++
 vta/python/vta/bitstream.py                        |   2 +-
 vta/tutorials/matrix_multiply.py                   |   6 +-
 vta/tutorials/optimize/convolution_opt.py          |   6 +-
 vta/tutorials/optimize/matrix_multiply_opt.py      |   4 +-
 vta/tutorials/vta_get_started.py                   |   2 +-
 199 files changed, 5909 insertions(+), 1652 deletions(-)
 copy apps/{howto_deploy => microtvm}/README.md (63%)
 create mode 100644 apps/microtvm/reference-vm/.gitignore
 create mode 100644 apps/microtvm/reference-vm/README.md
 create mode 100755 apps/microtvm/reference-vm/base-box-tool.py
 create mode 100644 apps/microtvm/reference-vm/zephyr/.gitignore
 create mode 100644 apps/microtvm/reference-vm/zephyr/Vagrantfile
 create mode 100644 apps/microtvm/reference-vm/zephyr/base-box/.gitignore
 copy apps/{rocm_rpc/Makefile => microtvm/reference-vm/zephyr/base-box/Vagrantfile.packer-template} (52%)
 create mode 100644 apps/microtvm/reference-vm/zephyr/base-box/setup.sh
 create mode 100644 apps/microtvm/reference-vm/zephyr/pyproject.toml
 copy tests/lint/clang_format.sh => apps/microtvm/reference-vm/zephyr/rebuild-tvm.sh (71%)
 create mode 100644 apps/microtvm/reference-vm/zephyr/setup.sh
 copy docker/{Dockerfile.ci_i386 => Dockerfile.ci_arm} (80%)
 create mode 100644 python/tvm/auto_scheduler/dispatcher.py
 copy python/tvm/{topi/generic/sort.py => auto_scheduler/env.py} (51%)
 create mode 100644 python/tvm/auto_scheduler/relay_integration.py
 create mode 100644 python/tvm/exec/microtvm_debug_shell.py
 create mode 100644 python/tvm/micro/transport/serial.py
 create mode 100644 python/tvm/script/diagnostics.py
 delete mode 100644 rust/tvm/test.rly
 copy python/tvm/relay/frontend/pytorch_utils.py => tests/micro/qemu/conftest.py (63%)
 create mode 100644 tests/python/contrib/test_ethosn/test_constant_duplication.py
 create mode 100644 tests/python/relay/test_auto_scheduler_task_extraction.py
 create mode 100644 tests/python/relay/test_auto_scheduler_tuning.py
 copy docker/install/ubuntu1804_install_clang_format.sh => tests/scripts/task_ci_python_setup.sh (68%)
 copy tests/scripts/{task_config_build_i386.sh => task_config_build_arm.sh} (92%)
 create mode 100644 tutorials/micro/micro_reference_vm.py


[incubator-tvm] 01/02: Debug segfault from loading Python

Posted by jr...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch cargo-build
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

commit 35d49bf279a07aaa92e255bb5d48ae485bda3cdf
Author: Jared Roesch <jr...@octoml.ai>
AuthorDate: Sun Oct 25 17:26:47 2020 -0700

    Debug segfault from loading Python
---
 python/tvm/__init__.py                            |  2 ++
 python/tvm/relay/__init__.py                      |  3 +-
 python/tvm/relay/analysis/__init__.py             |  2 +-
 python/tvm/relay/analysis/analysis.py             |  6 ++--
 python/tvm/relay/analysis/annotated_regions.py    |  2 +-
 python/tvm/relay/analysis/call_graph.py           |  4 +--
 python/tvm/relay/analysis/sparse_dense.py         | 15 ++++----
 python/tvm/relay/backend/graph_runtime_factory.py |  2 +-
 python/tvm/relay/build_module.py                  |  5 ++-
 python/tvm/relay/op/op.py                         | 43 +++++++++++------------
 python/tvm/relay/transform/__init__.py            |  2 +-
 python/tvm/relay/transform/memory_alloc.py        |  7 ++--
 python/tvm/relay/transform/transform.py           |  5 +--
 python/tvm/topi/cuda/__init__.py                  |  2 --
 python/tvm/topi/cuda/sparse.py                    |  3 +-
 rust/tvm-rt/src/map.rs                            | 12 +++++++
 rust/tvm-rt/src/module.rs                         | 16 +++++++++
 rust/tvm-rt/src/to_function.rs                    |  1 +
 rust/tvm/Cargo.toml                               |  2 +-
 rust/tvm/src/python.rs                            | 21 ++++++++---
 src/runtime/module.cc                             |  2 +-
 21 files changed, 101 insertions(+), 56 deletions(-)

diff --git a/python/tvm/__init__.py b/python/tvm/__init__.py
index 569e8f0..60f81f4 100644
--- a/python/tvm/__init__.py
+++ b/python/tvm/__init__.py
@@ -67,6 +67,8 @@ from . import support
 # Contrib initializers
 from .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel
 
+def cleanup():
+    _ffi.base._LIB = None
 
 def tvm_wrap_excepthook(exception_hook):
     """Wrap given excepthook with TVM additional work."""
diff --git a/python/tvm/relay/__init__.py b/python/tvm/relay/__init__.py
index cd96ecc..7e6ed4f 100644
--- a/python/tvm/relay/__init__.py
+++ b/python/tvm/relay/__init__.py
@@ -60,8 +60,7 @@ from . import qnn
 from .scope_builder import ScopeBuilder
 
 # Load Memory Passes
-from .transform import memory_alloc
-from .transform import memory_plan
+from .transform import memory_alloc, memory_plan
 
 # Required to traverse large programs
 setrecursionlimit(10000)
diff --git a/python/tvm/relay/analysis/__init__.py b/python/tvm/relay/analysis/__init__.py
index b4ea7f3..4ea4de7 100644
--- a/python/tvm/relay/analysis/__init__.py
+++ b/python/tvm/relay/analysis/__init__.py
@@ -26,7 +26,7 @@ from .annotated_regions import AnnotatedRegionSet
 from . import call_graph
 from .call_graph import CallGraph
 
-# Feature
+# # Feature
 from . import feature
 from . import sparse_dense
 
diff --git a/python/tvm/relay/analysis/analysis.py b/python/tvm/relay/analysis/analysis.py
index 7e49461..48e9ce0 100644
--- a/python/tvm/relay/analysis/analysis.py
+++ b/python/tvm/relay/analysis/analysis.py
@@ -20,9 +20,9 @@
 This file contains the set of passes for Relay, which exposes an interface for
 configuring the passes and scripting them in Python.
 """
-from tvm.ir import IRModule
-from tvm.relay import transform, build_module
-from tvm.runtime.ndarray import cpu
+from ...ir import IRModule
+from ...relay import transform, build_module
+from ...runtime.ndarray import cpu
 
 from . import _ffi_api
 from .feature import Feature
diff --git a/python/tvm/relay/analysis/annotated_regions.py b/python/tvm/relay/analysis/annotated_regions.py
index 437b97b..a18ccb9 100644
--- a/python/tvm/relay/analysis/annotated_regions.py
+++ b/python/tvm/relay/analysis/annotated_regions.py
@@ -17,7 +17,7 @@
 # pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, unused-import
 """Regions used in Relay."""
 
-from tvm.runtime import Object
+from ...runtime import Object
 from . import _ffi_api
 
 
diff --git a/python/tvm/relay/analysis/call_graph.py b/python/tvm/relay/analysis/call_graph.py
index 966659a..fd9704d 100644
--- a/python/tvm/relay/analysis/call_graph.py
+++ b/python/tvm/relay/analysis/call_graph.py
@@ -17,8 +17,8 @@
 # pylint: disable=no-else-return, unidiomatic-typecheck, invalid-name, unused-import
 """Call graph used in Relay."""
 
-from tvm.ir import IRModule
-from tvm.runtime import Object
+from ...ir import IRModule
+from ...runtime import Object
 from ..expr import GlobalVar
 from . import _ffi_api
 
diff --git a/python/tvm/relay/analysis/sparse_dense.py b/python/tvm/relay/analysis/sparse_dense.py
index d521748..51fab34 100644
--- a/python/tvm/relay/analysis/sparse_dense.py
+++ b/python/tvm/relay/analysis/sparse_dense.py
@@ -22,8 +22,8 @@ to block sparse model
 """
 from collections import namedtuple
 import numpy as np
-import scipy.sparse as sp
-import tvm
+
+from ... import nd, runtime
 from . import _ffi_api
 
 
@@ -73,6 +73,7 @@ def process_params(expr, params, block_size, sparsity_threshold):
     ret : Namedtuple[weight_name: Array[String], weight_shape: Array[Array[IntImm]]]
         return names of qualified dense weight and the shape in BSR format
     """
+    import scipy.sparse as sp
     memo = SparseAnalysisResult(weight_name=[], weight_shape=[])
     weight_names = _search_dense_op_weight(expr)
     for name in weight_names:
@@ -89,11 +90,11 @@ def process_params(expr, params, block_size, sparsity_threshold):
                 + list(sparse_weight.indices.shape)
                 + list(sparse_weight.indptr.shape)
             )
-            params[name + ".data"] = tvm.nd.array(sparse_weight.data)
-            params[name + ".indices"] = tvm.nd.array(sparse_weight.indices)
-            params[name + ".indptr"] = tvm.nd.array(sparse_weight.indptr)
+            params[name + ".data"] = nd.array(sparse_weight.data)
+            params[name + ".indices"] = nd.array(sparse_weight.indices)
+            params[name + ".indptr"] = nd.array(sparse_weight.indptr)
     ret = SparseAnalysisResult(
-        weight_name=tvm.runtime.convert(memo.weight_name),
-        weight_shape=tvm.runtime.convert(memo.weight_shape),
+        weight_name=runtime.convert(memo.weight_name),
+        weight_shape=runtime.convert(memo.weight_shape),
     )
     return ret
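The hunk above moves `import scipy.sparse as sp` from module scope into `process_params`, deferring it until first use so that merely importing `tvm.relay` cannot crash on a broken SciPy install (the follow-up commit on this branch is titled "SciPy causes crashes"). A minimal sketch of the same deferred-import pattern, with the stdlib `json` module standing in for the heavy dependency:

```python
def read_params(text):
    # Deferred import: the dependency is resolved on the first call, not
    # when the enclosing module is imported ('json' stands in here for
    # scipy.sparse, whose import-time side effects were the suspect).
    import json
    return json.loads(text)

# Importing the enclosing module pays no import cost for the dependency;
# only actually calling the function does.
print(read_params('{"weight_name": ["dense.weight"]}')["weight_name"])
```

The trade-off is that a missing dependency now surfaces as an `ImportError` at call time instead of at process startup, which is usually preferable for optional functionality.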
diff --git a/python/tvm/relay/backend/graph_runtime_factory.py b/python/tvm/relay/backend/graph_runtime_factory.py
index 4c6ac47..3427a62 100644
--- a/python/tvm/relay/backend/graph_runtime_factory.py
+++ b/python/tvm/relay/backend/graph_runtime_factory.py
@@ -21,7 +21,7 @@ from tvm._ffi.registry import get_global_func
 from tvm.runtime import ndarray
 
 
-class GraphRuntimeFactoryModule(object):
+class GraphRuntimeFactoryModule:
     """Graph runtime factory module.
     This is a module of graph runtime factory
 
diff --git a/python/tvm/relay/build_module.py b/python/tvm/relay/build_module.py
index 35bd8e6..7e32dea 100644
--- a/python/tvm/relay/build_module.py
+++ b/python/tvm/relay/build_module.py
@@ -24,7 +24,7 @@ import numpy as np
 from tvm.ir import IRModule
 
 from tvm.tir import expr as tvm_expr
-from .. import nd as _nd, autotvm
+from .. import nd as _nd, autotvm, register_func
 from ..target import Target
 from ..contrib import graph_runtime as _graph_rt
 from . import _build_module
@@ -186,6 +186,9 @@ class BuildModule(object):
             ret[key] = value.data
         return ret
 
+@register_func("tvm.relay.build")
+def _rust_build_module(mod, target=None, target_host=None, params=None, mod_name="default"):
+    return build(mod, target, target_host, params, mod_name).module
 
 def build(mod, target=None, target_host=None, params=None, mod_name="default"):
     """Helper function that builds a Relay function to run on TVM graph
diff --git a/python/tvm/relay/op/op.py b/python/tvm/relay/op/op.py
index fa420c4..b8c1d69 100644
--- a/python/tvm/relay/op/op.py
+++ b/python/tvm/relay/op/op.py
@@ -16,12 +16,11 @@
 # under the License.
 # pylint: disable=unused-argument,invalid-name
 """The base node types for the Relay language."""
-import tvm._ffi
-import tvm.ir
+from ... import _ffi, ir
 from tvm.auto_scheduler.relay_integration import auto_schedule_topi, auto_schedule_impl_suffix
-from tvm.driver import lower, build
-from tvm.target import get_native_generic_func, GenericFunc
-from tvm.runtime import Object
+from ...driver import lower, build
+from ...target import get_native_generic_func, GenericFunc
+from ...runtime import Object
 from . import _make
 
 
@@ -38,7 +37,7 @@ def get(op_name):
     op : Op
         The op of the corresponding name
     """
-    return tvm.ir.Op.get(op_name)
+    return ir.Op.get(op_name)
 
 
 class OpPattern(object):
@@ -65,7 +64,7 @@ class OpPattern(object):
     OPAQUE = 8
 
 
-@tvm._ffi.register_object("relay.OpImplementation")
+@_ffi.register_object("relay.OpImplementation")
 class OpImplementation(Object):
     """Operator implementation"""
 
@@ -112,12 +111,12 @@ class OpImplementation(Object):
         return _OpImplementationSchedule(self, attrs, outs, target)
 
 
-@tvm._ffi.register_object("relay.OpSpecialization")
+@_ffi.register_object("relay.OpSpecialization")
 class OpSpecialization(Object):
     """Operator specialization"""
 
 
-@tvm._ffi.register_object("relay.OpStrategy")
+@_ffi.register_object("relay.OpStrategy")
 class OpStrategy(Object):
     """Operator strategy"""
 
@@ -208,7 +207,7 @@ def register_compute(op_name, compute=None, level=10):
     level : int
         The priority level
     """
-    return tvm.ir.register_op_attr(op_name, "FTVMCompute", compute, level)
+    return ir.register_op_attr(op_name, "FTVMCompute", compute, level)
 
 
 def register_strategy(op_name, fstrategy=None, level=10):
@@ -229,7 +228,7 @@ def register_strategy(op_name, fstrategy=None, level=10):
     if not isinstance(fstrategy, GenericFunc):
         assert hasattr(fstrategy, "generic_func_node")
         fstrategy = fstrategy.generic_func_node
-    return tvm.ir.register_op_attr(op_name, "FTVMStrategy", fstrategy, level)
+    return ir.register_op_attr(op_name, "FTVMStrategy", fstrategy, level)
 
 
 def register_schedule(op_name, schedule, level=10):
@@ -310,7 +309,7 @@ def register_alter_op_layout(op_name, alter_layout=None, level=10):
     level : int
         The priority level
     """
-    return tvm.ir.register_op_attr(op_name, "FTVMAlterOpLayout", alter_layout, level)
+    return ir.register_op_attr(op_name, "FTVMAlterOpLayout", alter_layout, level)
 
 
 def register_convert_op_layout(op_name, convert_layout=None, level=10):
@@ -327,7 +326,7 @@ def register_convert_op_layout(op_name, convert_layout=None, level=10):
     level : int
         The priority level
     """
-    return tvm.ir.register_op_attr(op_name, "FTVMConvertOpLayout", convert_layout, level)
+    return ir.register_op_attr(op_name, "FTVMConvertOpLayout", convert_layout, level)
 
 
 def register_legalize(op_name, legal_op=None, level=10):
@@ -344,7 +343,7 @@ def register_legalize(op_name, legal_op=None, level=10):
     level : int
         The priority level
     """
-    return tvm.ir.register_op_attr(op_name, "FTVMLegalize", legal_op, level)
+    return ir.register_op_attr(op_name, "FTVMLegalize", legal_op, level)
 
 
 def register_pattern(op_name, pattern, level=10):
@@ -361,7 +360,7 @@ def register_pattern(op_name, pattern, level=10):
     level : int
         The priority level
     """
-    return tvm.ir.register_op_attr(op_name, "TOpPattern", pattern, level)
+    return ir.register_op_attr(op_name, "TOpPattern", pattern, level)
 
 
 def register_gradient(op_name, fgradient=None, level=10):
@@ -378,7 +377,7 @@ def register_gradient(op_name, fgradient=None, level=10):
     level : int
         The priority level
     """
-    return tvm.ir.register_op_attr(op_name, "FPrimalGradient", fgradient, level)
+    return ir.register_op_attr(op_name, "FPrimalGradient", fgradient, level)
 
 
 def register_shape_func(op_name, data_dependant, shape_func=None, level=10):
@@ -400,7 +399,7 @@ def register_shape_func(op_name, data_dependant, shape_func=None, level=10):
         The priority level
     """
     get(op_name).set_attr("TShapeDataDependant", data_dependant, level)
-    return tvm.ir.register_op_attr(op_name, "FShapeFunc", shape_func, level)
+    return ir.register_op_attr(op_name, "FShapeFunc", shape_func, level)
 
 
 def register_external_compiler(op_name, fexternal=None, level=10):
@@ -419,15 +418,15 @@ def register_external_compiler(op_name, fexternal=None, level=10):
     level : int
         The priority level
     """
-    return tvm.ir.register_op_attr(op_name, "FTVMExternalCompiler", fexternal, level)
+    return ir.register_op_attr(op_name, "FTVMExternalCompiler", fexternal, level)
 
 
-@tvm._ffi.register_func("relay.op.compiler._lower")
+@_ffi.register_func("relay.op.compiler._lower")
 def _lower(name, schedule, inputs, outputs):
     return lower(schedule, list(inputs) + list(outputs), name=name)
 
 
-@tvm._ffi.register_func("relay.op.compiler._build")
+@_ffi.register_func("relay.op.compiler._build")
 def _build(lowered_funcs):
     return build(lowered_funcs, target="llvm")
 
@@ -444,7 +443,7 @@ def debug(expr, debug_func=None):
 
     if debug_func:
         name = "debugger_func{}".format(__DEBUG_COUNTER__)
-        tvm._ffi.register_func(name, debug_func)
+        _ffi.register_func(name, debug_func)
         __DEBUG_COUNTER__ += 1
     else:
         name = ""
@@ -452,4 +451,4 @@ def debug(expr, debug_func=None):
     return _make.debug(expr, name)
 
 
-tvm._ffi._init_api("relay.op", __name__)
+_ffi._init_api("relay.op", __name__)
diff --git a/python/tvm/relay/transform/__init__.py b/python/tvm/relay/transform/__init__.py
index 1d0ea17..9684e42 100644
--- a/python/tvm/relay/transform/__init__.py
+++ b/python/tvm/relay/transform/__init__.py
@@ -19,4 +19,4 @@
 # transformation passes
 from .transform import *
 from .recast import recast
-from . import memory_alloc
+# from . import memory_alloc
diff --git a/python/tvm/relay/transform/memory_alloc.py b/python/tvm/relay/transform/memory_alloc.py
index 66528c8..593a411 100644
--- a/python/tvm/relay/transform/memory_alloc.py
+++ b/python/tvm/relay/transform/memory_alloc.py
@@ -20,14 +20,13 @@ A pass for manifesting explicit memory allocations.
 """
 import numpy as np
 
-from tvm.ir.transform import PassContext, module_pass
-from tvm.relay.transform import InferType
-from tvm import nd, container
+from ... import DataType, register_func, nd, container, cpu
+from ...ir.transform import PassContext, module_pass
+from . import InferType
 from ..function import Function
 from ..expr_functor import ExprVisitor, ExprMutator
 from ..scope_builder import ScopeBuilder
 from .. import op
-from ... import DataType, register_func
 from .. import ty, expr
 from ..backend import compile_engine
 from ..op.memory import flatten_tuple_type, from_tuple_type, to_tuple_type
diff --git a/python/tvm/relay/transform/transform.py b/python/tvm/relay/transform/transform.py
index 4907a0b..3b01182 100644
--- a/python/tvm/relay/transform/transform.py
+++ b/python/tvm/relay/transform/transform.py
@@ -23,11 +23,12 @@ import inspect
 import functools
 import warnings
 
+from ...ir import transform as tvm_transform
 import tvm.ir
 from tvm import te
 from tvm.runtime import ndarray as _nd
 
-from tvm import relay
+# from tvm import relay
 from . import _ffi_api
 
 
@@ -82,7 +83,7 @@ def build_config(opt_level=2, required_pass=None, disabled_pass=None, trace=None
 
 
 @tvm._ffi.register_object("relay.FunctionPass")
-class FunctionPass(tvm.ir.transform.Pass):
+class FunctionPass:
     """A pass that works on each tvm.relay.Function in a module. A function
     pass class should be created through `function_pass`.
     """
diff --git a/python/tvm/topi/cuda/__init__.py b/python/tvm/topi/cuda/__init__.py
index 3ff544f..47badb5 100644
--- a/python/tvm/topi/cuda/__init__.py
+++ b/python/tvm/topi/cuda/__init__.py
@@ -17,8 +17,6 @@
 
 # pylint: disable=redefined-builtin, wildcard-import
 """CUDA specific declaration and schedules."""
-from __future__ import absolute_import as _abs
-
 from .conv1d import *
 from .conv1d_transpose_ncw import *
 from .conv2d import *
diff --git a/python/tvm/topi/cuda/sparse.py b/python/tvm/topi/cuda/sparse.py
index ebac551..50f6ae8 100644
--- a/python/tvm/topi/cuda/sparse.py
+++ b/python/tvm/topi/cuda/sparse.py
@@ -17,7 +17,6 @@
 
 """Sparse operators"""
 import numpy as np
-import scipy.sparse as sp
 
 import tvm
 from tvm import relay, te
@@ -326,6 +325,7 @@ def schedule_sparse_dense_padded(outs):
 
 def pad_sparse_matrix(matrix, blocksize):
     """Pad rows of sparse matrix matrix so that they are a multiple of blocksize."""
+    import scipy.sparse as sp
     assert isinstance(matrix, sp.bsr_matrix)
     new_entries = np.zeros(matrix.shape[0], dtype=matrix.indptr.dtype)
     bsr = matrix.blocksize[0]
@@ -362,6 +362,7 @@ def _alter_sparse_dense_layout(_attrs, inputs, _tinfos, _out_type):
     sparse_dense implementation for one that operates on a padded matrix. We
     also pad the matrix.
     """
+    import scipy.sparse as sp
     if (
         isinstance(inputs[1], relay.Constant)
         and isinstance(inputs[2], relay.Constant)
diff --git a/rust/tvm-rt/src/map.rs b/rust/tvm-rt/src/map.rs
index b8bfb4e..5df9040 100644
--- a/rust/tvm-rt/src/map.rs
+++ b/rust/tvm-rt/src/map.rs
@@ -107,6 +107,18 @@ where
         let oref: ObjectRef = map_get_item(self.object.clone(), key.upcast())?;
         oref.downcast()
     }
+
+    pub fn empty() -> Self {
+        Self::from_iter(vec![].into_iter())
+    }
+
+    // TODO(@jroesch): I don't think this is a correct implementation.
+    pub fn null() -> Self {
+        Map {
+            object: ObjectRef::null(),
+            _data: PhantomData,
+        }
+    }
 }
 
 pub struct IntoIter<K, V> {
diff --git a/rust/tvm-rt/src/module.rs b/rust/tvm-rt/src/module.rs
index c0822a5..18347da 100644
--- a/rust/tvm-rt/src/module.rs
+++ b/rust/tvm-rt/src/module.rs
@@ -30,6 +30,8 @@ use tvm_sys::ffi;
 
 use crate::errors::Error;
 use crate::{errors, function::Function};
+use crate::String as TString;
+use crate::RetValue;
 
 const ENTRY_FUNC: &str = "__tvm_main__";
 
@@ -49,6 +51,9 @@ crate::external! {
 
     #[name("runtime.ModuleLoadFromFile")]
     fn load_from_file(file_name: CString, format: CString) -> Module;
+
+    #[name("runtime.ModuleSaveToFile")]
+    fn save_to_file(module: ffi::TVMModuleHandle, name: TString, fmt: TString);
 }
 
 impl Module {
@@ -110,6 +115,10 @@ impl Module {
         Ok(module)
     }
 
+    pub fn save_to_file(&self, name: String, fmt: String) -> Result<(), Error> {
+        save_to_file(self.handle(), name.into(), fmt.into())
+    }
+
     /// Checks if a target device is enabled for a module.
     pub fn enabled(&self, target: &str) -> bool {
         let target = CString::new(target).unwrap();
@@ -128,3 +137,10 @@ impl Drop for Module {
         check_call!(ffi::TVMModFree(self.handle));
     }
 }
+
+// impl std::convert::TryFrom<RetValue> for Module {
+//     type Error = Error;
+//     fn try_from(ret_value: RetValue) -> Result<Module, Self::Error> {
+//         Ok(Module::new(ret_value.try_into()?))
+//     }
+// }
diff --git a/rust/tvm-rt/src/to_function.rs b/rust/tvm-rt/src/to_function.rs
index affd81b..c5ede7d 100644
--- a/rust/tvm-rt/src/to_function.rs
+++ b/rust/tvm-rt/src/to_function.rs
@@ -255,6 +255,7 @@ impl_typed_and_to_function!(2; A, B);
 impl_typed_and_to_function!(3; A, B, C);
 impl_typed_and_to_function!(4; A, B, C, D);
 impl_typed_and_to_function!(5; A, B, C, D, E);
+impl_typed_and_to_function!(6; A, B, C, D, E, G);
 
 #[cfg(test)]
 mod tests {
diff --git a/rust/tvm/Cargo.toml b/rust/tvm/Cargo.toml
index 153a195..c1d8aa8 100644
--- a/rust/tvm/Cargo.toml
+++ b/rust/tvm/Cargo.toml
@@ -50,7 +50,7 @@ tvm-macros = { version = "*", path = "../tvm-macros/" }
 paste = "0.1"
 mashup = "0.1"
 once_cell = "^1.3.1"
-pyo3 = { version = "0.11.1", optional = true }
+pyo3 = { version = "^0.12", optional = true }
 codespan-reporting = "0.9.5"
 structopt = { version = "0.3" }
 
diff --git a/rust/tvm/src/python.rs b/rust/tvm/src/python.rs
index 89558af..50ce7b0 100644
--- a/rust/tvm/src/python.rs
+++ b/rust/tvm/src/python.rs
@@ -18,6 +18,7 @@
  */
 
 use pyo3::prelude::*;
+use once_cell::sync::OnceCell;
 
 /// Load the Python interpreter into the address space.
 ///
@@ -29,6 +30,8 @@ use pyo3::prelude::*;
 pub fn load() -> Result<String, ()> {
     let gil = Python::acquire_gil();
     let py = gil.python();
     load_python_tvm_(py).map_err(|e| {
         // We can't display Python exceptions via std::fmt::Display,
         // so print the error here manually.
@@ -36,12 +39,22 @@ pub fn load() -> Result<String, ()> {
     })
 }
 
-// const TVMC_CODE: &'static str = include_str!("tvmc.py");
+pub fn import(mod_to_import: &str) -> PyResult<()> {
+    let gil = Python::acquire_gil();
+    let py = gil.python();
+    import_python(py, mod_to_import)?;
+    Ok(())
+}
+
+fn import_python<'p, 'b: 'p>(py: Python<'p>, to_import: &'b str) -> PyResult<&'p PyModule> {
+    let imported_mod = py.import(to_import)?;
+    Ok(imported_mod)
+}
+
 
 fn load_python_tvm_(py: Python) -> PyResult<String> {
-    let sys = py.import("tvm")?;
-    let version: String = sys.get("__version__")?.extract()?;
-    // py.run(TVMC_CODE, None, None)?;
+    let imported_mod = import_python(py, "tvm")?;
+    let version: String = imported_mod.get("__version__")?.extract()?;
     Ok(version)
 }
 
diff --git a/src/runtime/module.cc b/src/runtime/module.cc
index ac2b60f..af5feab 100644
--- a/src/runtime/module.cc
+++ b/src/runtime/module.cc
@@ -175,7 +175,7 @@ TVM_REGISTER_GLOBAL("runtime.ModuleGetTypeKey").set_body_typed([](Module mod) {
 TVM_REGISTER_GLOBAL("runtime.ModuleLoadFromFile").set_body_typed(Module::LoadFromFile);
 
 TVM_REGISTER_GLOBAL("runtime.ModuleSaveToFile")
-    .set_body_typed([](Module mod, std::string name, std::string fmt) {
+    .set_body_typed([](Module mod, tvm::String name, tvm::String fmt) {
       mod->SaveToFile(name, fmt);
     });
 


[incubator-tvm] 02/02: SciPy causes crashes

Posted by jr...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch cargo-build
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

commit 1604de7f94524c2a79df6aaf50c578a010be9681
Author: Jared Roesch <jr...@octoml.ai>
AuthorDate: Thu Nov 5 18:52:41 2020 -0800

    SciPy causes crashes
---
 python/tvm/relay/frontend/__init__.py                  | 3 ---
 python/tvm/relay/frontend/tensorflow.py                | 6 ++++--
 python/tvm/topi/testing/conv1d_transpose_ncw_python.py | 3 +--
 python/tvm/topi/testing/conv2d_hwcn_python.py          | 2 +-
 tests/python/topi/python/test_topi_sparse.py           | 2 +-
 5 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/python/tvm/relay/frontend/__init__.py b/python/tvm/relay/frontend/__init__.py
index 7e16499..aa8ac4f 100644
--- a/python/tvm/relay/frontend/__init__.py
+++ b/python/tvm/relay/frontend/__init__.py
@@ -20,9 +20,6 @@ Frontends for constructing Relay programs.
 Contains the model importers currently defined
 for Relay.
 """
-
-from __future__ import absolute_import
-
 from .mxnet import from_mxnet
 from .mxnet_qnn_op_utils import quantize_conv_bias_mkldnn_from_var
 from .keras import from_keras
diff --git a/python/tvm/relay/frontend/tensorflow.py b/python/tvm/relay/frontend/tensorflow.py
index c6079b4..0283146 100644
--- a/python/tvm/relay/frontend/tensorflow.py
+++ b/python/tvm/relay/frontend/tensorflow.py
@@ -904,10 +904,12 @@ def _batch_matmul():
 
 
 def _sparse_tensor_dense_matmul():
-    # Sparse utility from scipy
-    from scipy.sparse import csr_matrix
 
     def _impl(inputs, attr, params, mod):
+        # Import scipy lazily: pulling it in at module load time
+        # prevents TVM from being loaded from other languages.
+        from scipy.sparse import csr_matrix
+
         assert len(inputs) == 4, "There should be 4 input tensors"
 
         indices_tensor = _infer_value(inputs[0], params, mod).asnumpy()
diff --git a/python/tvm/topi/testing/conv1d_transpose_ncw_python.py b/python/tvm/topi/testing/conv1d_transpose_ncw_python.py
index 85e1410..642908a 100644
--- a/python/tvm/topi/testing/conv1d_transpose_ncw_python.py
+++ b/python/tvm/topi/testing/conv1d_transpose_ncw_python.py
@@ -17,11 +17,9 @@
 # pylint: disable=unused-variable
 """Transposed 1D convolution in python"""
 import numpy as np
-import scipy
 import tvm.topi.testing
 from tvm.topi.nn.utils import get_pad_tuple1d
 
-
 def conv1d_transpose_ncw_python(a_np, w_np, stride, padding, output_padding):
     """Transposed 1D convolution operator in NCW layout.
 
@@ -51,6 +49,7 @@ def conv1d_transpose_ncw_python(a_np, w_np, stride, padding, output_padding):
         3-D with shape [batch, out_channel, out_width]
 
     """
+    import scipy
     batch, in_c, in_w = a_np.shape
     _, out_c, filter_w = w_np.shape
     opad = output_padding[0]
diff --git a/python/tvm/topi/testing/conv2d_hwcn_python.py b/python/tvm/topi/testing/conv2d_hwcn_python.py
index 9ee66df..bcfd921 100644
--- a/python/tvm/topi/testing/conv2d_hwcn_python.py
+++ b/python/tvm/topi/testing/conv2d_hwcn_python.py
@@ -17,7 +17,6 @@
 # pylint: disable=invalid-name, line-too-long, unused-variable, too-many-locals
 """Convolution in python"""
 import numpy as np
-import scipy.signal
 from tvm.topi.nn.utils import get_pad_tuple
 
 
@@ -45,6 +44,7 @@ def conv2d_hwcn_python(a_np, w_np, stride, padding):
     b_np : np.ndarray
         4-D with shape [out_height, out_width, out_channel, batch]
     """
+    import scipy.signal
     in_height, in_width, in_channel, batch = a_np.shape
     kernel_h, kernel_w, _, num_filter = w_np.shape
     if isinstance(stride, int):
diff --git a/tests/python/topi/python/test_topi_sparse.py b/tests/python/topi/python/test_topi_sparse.py
index 62f49e2..fb5faf9 100644
--- a/tests/python/topi/python/test_topi_sparse.py
+++ b/tests/python/topi/python/test_topi_sparse.py
@@ -25,7 +25,6 @@ from tvm.topi.utils import get_const_tuple
 import tvm.contrib.sparse as tvmsp
 from collections import namedtuple
 import time
-import scipy.sparse as sp
 import tvm.testing
 
 _sparse_dense_implement = {
@@ -248,6 +247,7 @@ def test_dense():
 
 
 def test_sparse_dense_csr():
+    import scipy.sparse as sp
     M, N, K, density = 1, 17, 47, 0.2
     X_np = np.random.randn(M, K).astype("float32")
     W_sp_np = sp.random(N, K, density=density, format="csr", dtype="float32")