Posted to dev@tvm.apache.org by Yizhi Liu via TVM Discuss <no...@discuss.tvm.ai> on 2020/05/18 23:58:14 UTC

[TVM Discuss] [Development/RFC] [RFC] Minor (bugfix) Release for v0.6


This is a proposal to do a minor (bugfix) release of v0.6, a.k.a. v0.6.1. Commits will be cherry-picked to the v0.6.1 branch. We will follow the standard [Apache release process](https://tvm.apache.org/docs/contribute/release_process.html).

I will go through the commit history to compile a list of bug fixes for review. People are also welcome to propose anything they want included in the release.
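As a concrete illustration of the cherry-pick workflow described above, here is a minimal sketch in a throwaway repository; the branch names mirror this proposal, while the commit contents are made up for the example:

```shell
#!/bin/sh
# Sketch of backporting a fix: a bugfix commit made on the development
# branch is cherry-picked onto a v0.6.1 branch cut from the v0.6 point,
# while unrelated feature work stays behind.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev

echo base > file.txt
git add file.txt
git commit -qm "v0.6 release point"
git branch v0.6                        # mark where v0.6 was released

echo feature >> file.txt
git commit -qam "new feature (stays on the development branch)"

echo bugfix > fix.txt
git add fix.txt
git commit -qm "bugfix to backport"
fix_sha=$(git rev-parse HEAD)

git checkout -q -b v0.6.1 v0.6         # cut the bugfix branch from v0.6
git cherry-pick "$fix_sha"             # bring only the bugfix over
```

After the cherry-pick, the v0.6.1 branch contains the bugfix but not the feature commit, which is exactly the shape of the list below.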





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/1) to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click here](https://discuss.tvm.ai/email/unsubscribe/4c6a5aa9610294de9bc04d8b6c4cc66fe3a6431cf58b83c3c8cf9735aec118fc).


Posted by Tom Gall via TVM Discuss <no...@discuss.tvm.ai>.

More of a policy comment: for a fix release, and the potential for more, as a community we might want to set some expectations for how long a release will be maintained, update frequency, etc., akin to the Linux kernel community (e.g. https://www.kernel.org/category/releases.html).





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/5) to respond.



Posted by tqchen via TVM Discuss <no...@discuss.tvm.ai>.

We could, but given that it is not strictly a bug, we can also choose not to.





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/4) to respond.



Posted by Junru Shao via TVM Discuss <no...@discuss.tvm.ai>.

Shall we also backport the fixes in the latest dmlc-core to help reduce stack usage?





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/3) to respond.



Posted by Yizhi Liu via TVM Discuss <no...@discuss.tvm.ai>.

Thanks for pointing that out. I'll remove them accordingly.





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/8) to respond.



Posted by masahi via TVM Discuss <no...@discuss.tvm.ai>.

@yzhliu I see there are some PyTorch-related fixes, but the PyTorch frontend was not part of the v0.6 release.





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/7) to respond.



Posted by Yizhi Liu via TVM Discuss <no...@discuss.tvm.ai>.

Here is a list of bug fixes we're going to apply to the v0.6 branch; please let me know if I missed anything.

* [RELAY] bugfix. #2215
* [Graph Tuner] Fix benchmark layout in graph tuner #3926
* [VTA] Parameterization and bug fix in TensorLoad module #3841
* [VTA] Fix TSIM compile error in Linux (add missing -fPIC flag) #3876
* Fix Windows build #3429
* [GOLANG] Some fixes for golang latest version compiler. #3119 #3182
* [FRONTEND][TENSORFLOW] bug fix for tensorflow official slim models. #2864
* [VTA] Bug fix for padded load with large inputs #4293
* [VTA] Hotfix for padded load test in Chisel VTA #4264
* [BUGFIX] Fix search path for libtvm_topi.so #4467
* [Quantization] Fix annotation for multiply op #4458
* Fix MSVC build error with container.h #4455
* [Relay][Pass] Fix lambda lift pass for recursive call #4432
* [Chisel][VTA] Fix multiple transfer issue in LoadUop module #4442
* [Relay][Fix] Fix alter op layout when calling a global var #4454
* [RUNTIME] Fix compile errors of OpenCL FPGA backend #4492
* Some Windows and MSVC fixes #4569
* [VTA] Fixed a crash issue in TSIM driver #4527
* Fix bias_add gradient #4516
* [NODE][Serialization]fix serialization precision loss in float #4503
* Fix Base64OutStream portability issue #4668
* fix topi.nn.global_pool layout="NHWC" #4656
* [Bugfix] fskip of EliminateCommonSubexpr cannot always return false #4620
* [fix] Disable copy constructor for external codegen #4597
* [Runtime] Fix NDArray SaveDLTensor declaration and implementation signature different #4586
* [FFI][Windows] Fix hasattr by extracting Python error type from Windows error message #4780
* Make sure to visit the arguments of inlined functions #4783
* [AUTOTVM] Fix a bug in generating the search space #4779
* Fix Tensorflow conv3d pad bug, add non-cubic data and kernel tests #4772
* Fix onnx import bugs #4750
* [Fix] Fix dense x86 schedule #4728
* [Relay][Frontend][TF] fix _parse_param bug #4711
* [Fix] Fix RemoveUnusedFunctions pass #4700
* [VTA] Fix an issue in updating uop_idx in the TensorGemm module #4694
* Fix Python syntax error in start_rpc_server_to_tracker.py #4682
* [WIP] Fixing an Infinite Loop case in UnmatchedChecker. #4881
* [CodeGen][CUDA] Fix issues in cuda codegen #4876
* [Bugfix] Fixed crash caused by reversing bitwise operations #4852
* Fixed process termination routine in windows #4844
* [Realy][fix] Fix alpha_equal bug for attribute check #4897
* [RELAY][FRONTEND][TENSORFLOW] Fix FuseBatchNorm output cast error if need_cast is True #4894
* fix ROCm strategy for winograd conv selection #5001
* [Torch] fix unordered dictionary problem for python version under 3.6 #4982
* [Runtime] Fix TVM_DLL_EXPORT_TYPED_FUNC to work on Windows #4955
* [Fix] Fix CompilerAttrs #5109
* [Relay][VM] Fix compilation of If-Elses #5040
* [External Codegen] Fix annotate pass static variable #5023
* [Runtime] Export GraphRuntime in tvm_runtime.dll #5002
* rocm: fix miopen convolutions #5179
* Fix for issue #4831. The data_min_idx and data_max_idx were flipped. #5136
* [Torch] Fix conv2d conversion for group conv (group > 1 but != in channels) #5132
* [Fix][VM] Fix copy constructor #5237
* [RELAY][FIX] Fix hang in MergeCompilerRegions #5227
* [CodeGen][CUDA] Fix bugs #5209
* Fix intel conv2d auto tune #5200
* [BUGFIX] Fix CRT static test bug #5293
* [VTA] Fix VTA compile issue #5481
* [Fix] Add ConstantNode to IsAtomic #5457
* [Fontend][Pytorch] Fix translation of transpose when axis argument is as a list #5451
* [CODEGEN][CUDA] Fix a bug when vectorized load&store was involved for… #5428
* [Fix] Fix conv2d alter op for arm cpu #5532
* Fix an issue with ONNX Upsample #5530
* LRN only supports 4D tensors, remove it from alter_op_layout #5520
* [BUGFIX][BACKPORT-0.6][ARITH] Fix FloorMod Simplifier #5509
* [Autotvm]Fix the runtime raise error #5586
* [PYTORCH]expand bug fix #5576
* fix small bug about dense_grad #5695
* [Bugfix] Fix Python debugger segfaults with TVM built with LLVM #5685
* Fix gelu in PyTorch frontend, tighten numerical checks #5763
* Fix extent one for the post_stmt in loop partition #3734 
* Fix broken loop partitioning due to recent changes. #4243
* [TOPI] fix inconsistent tag name #4134 
* [CUDA] Fix fp16 intrin, disable bad fp16 vecadd test for now #4239





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/6) to respond.



Posted by Yizhi Liu via TVM Discuss <no...@discuss.tvm.ai>.

Here's a list of fixes we applied to the v0.6 branch. I will cut a tag this Friday.

* Fixed process termination routine in windows #4844
* [Runtime] Fix NDArray SaveDLTensor declaration and implementation signature different #4586
* [NODE][Serialization]fix serialization precision loss in float #4503
* [Relay][Frontend][TF] fix _parse_param bug #4711
* Fix bias_add gradient #4516
* Make sure to visit the arguments of inlined functions #4783
* Fix Python syntax error in start_rpc_server_to_tracker.py #4682
* [Bugfix] Fixed crash caused by reversing bitwise operations #4852
* [Fix][VM] Fix copy constructor #5237
* fix small bug about dense_grad #5695
* [Fix] Fix conv2d alter op for arm cpu #5532
* [Fix] Fix dense x86 schedule #4728
* [Relay][Fix] Fix alter op layout when calling a global var #4454
* [Relay][Pass] Fix lambda lift pass for recursive call #4432
* [BUGFIX] Fix search path for libtvm_topi.so #4467
* [Bugfix] Fix Python debugger segfaults with TVM built with LLVM #5685
* [RUNTIME] Fix compile errors of OpenCL FPGA backend #4492
* [BUGFIX][BACKPORT-0.6][ARITH] Fix FloorMod Simplifier #5509
* Some Windows and MSVC fixes #4569
* [Chisel][VTA] Fix multiple transfer issue in LoadUop module #4442
* [VTA] Fix an issue in updating uop_idx in the TensorGemm module #4694
* [VTA] Fixed a crash issue in TSIM driver #4527
* [VTA] Enable streamlined GEMM execution #4392
* [VTA][Chisel] End-to-end Inference with Chisel VTA #4574
* Added declare of aluBits for TensorAlu #4624
* [Quantization] Fix annotation for multiply op #4458
* LRN only supports 4D tensors, remove it from alter_op_layout #5520
* fix topi.nn.global_pool layout="NHWC" #4656
* [FFI][Windows] Fix hasattr by extracting Python error type from Windows error message #4780
* [Runtime] Export GraphRuntime in tvm_runtime.dll #5002
* Fix Base64OutStream portability issue #4668
* [AUTOTVM] Fix a bug in generating the search space #4779
* [Relay][VM] Fix compilation of If-Elses #5040
* [RELAY][FRONTEND][TENSORFLOW] Fix FuseBatchNorm output cast error if need_cast is True #4894
* [Bugfix] fskip of EliminateCommonSubexpr cannot always return false #4620
* [Fix] Add ConstantNode to IsAtomic #5457
* [Fix] Fix RemoveUnusedFunctions pass #4700
* [Realy][fix] Fix alpha_equal bug for attribute check #4897
* [BACKPORT-0.6][Bugfix][Arith] keep div_mode during floordiv simplify #5922
* [ARITH][BACKPORT-0.6] fix a min/max simplify bug #5761
* [0.6-BACKPORT] Improve robustness of the docs build #5583
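Cutting the tag mentioned above amounts to something like the following (a hypothetical sketch in a throwaway repository; a real Apache release additionally goes through the signed-artifact and voting steps of the release process linked in the first post):

```shell
#!/bin/sh
# Sketch of cutting a release tag on the bugfix branch; the branch name,
# tag name, and commit message are illustrative only.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev

echo fixes > file.txt
git add file.txt
git commit -qm "backported fixes"

git checkout -q -b v0.6                    # release branch with the backports
git tag -a v0.6.1 -m "Apache TVM v0.6.1"   # annotated tag at the branch head
git describe --tags                        # prints "v0.6.1"
```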





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/9) to respond.



Posted by tqchen via TVM Discuss <no...@discuss.tvm.ai>.

Thanks Yizhi, I think it is a great idea. We have already backported a few patches.





---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/2) to respond.
