Posted to discuss-archive@tvm.apache.org by Ligeng Zhu via Apache TVM Discuss <no...@discuss.tvm.ai> on 2022/02/15 21:33:38 UTC

[Apache TVM Discuss] [Questions] Relay.transform to remove comments?


Hi there,

When following the [quantization tutorial](https://tvm.apache.org/docs/how_to/deploy_models/deploy_prequantized.html), the generated Relay IR includes a lot of comment annotations like

```
fn (%input: Tensor[(1, 3, 224, 224), float32], %features.0.0_weight: Tensor[(32, 3, 3, 3), float32], %features.0.0_bias: Tensor[(32), float32], %features.1.conv.0.0_weight: Tensor[(32, 1, 3, 3), float32], %features.1.conv.0.0_bias: Tensor[(32), float32], %features.1.conv.1_weight: Tensor[(16, 32, 1, 1), float32], %features.1.conv.1_bias: Tensor[(16), float32], %features.2.conv.0.0_weight: Tensor[(96, 16, 1, 1), float32], %features.2.conv.0.0_bias: Tensor[(96), float32], %features.2.conv.1.0_weight: Tensor[(96, 1, 3, 3), float32], %features.2.conv.1.0_bias: Tensor[(96), float32], %features.2.conv.2_weight: Tensor[(24, 96, 1, 1), float32], %features.2.conv.2_bias: Tensor[(24), float32], %features.3.conv.0.0_weight: Tensor[(144, 24, 1, 1), float32], %features.3.conv.0.0_bias: Tensor[(144), float32], %features.3.conv.1.0_weight: Tensor[(144, 1, 3, 3), float32], %features.3.conv.1.0_bias: Tensor[(144), float32], %features.3.conv.2_weight: Tensor[(24, 144, 1, 1), float32], %features.3.conv.2_bias: Tensor[(24), float32], %features.4.conv.0.0_weight: Tensor[(144, 24, 1, 1), float32], %features.4.conv.0.0_bias: Tensor[(144), float32], %features.4.conv.1.0_weight: Tensor[(144, 1, 3, 3), float32], %features.4.conv.1.0_bias: Tensor[(144), float32], %features.4.conv.2_weight: Tensor[(32, 144, 1, 1), float32], %features.4.conv.2_bias: Tensor[(32), float32], %features.5.conv.0.0_weight: Tensor[(192, 32, 1, 1), float32], %features.5.conv.0.0_bias: Tensor[(192), float32], %features.5.conv.1.0_weight: Tensor[(192, 1, 3, 3), float32], %features.5.conv.1.0_bias: Tensor[(192), float32], %features.5.conv.2_weight: Tensor[(32, 192, 1, 1), float32], %features.5.conv.2_bias: Tensor[(32), float32], %features.6.conv.0.0_weight: Tensor[(192, 32, 1, 1), float32], %features.6.conv.0.0_bias: Tensor[(192), float32], %features.6.conv.1.0_weight: Tensor[(192, 1, 3, 3), float32], %features.6.conv.1.0_bias: Tensor[(192), float32], %features.6.conv.2_weight: Tensor[(32, 192, 1, 1), float32], 
%features.6.conv.2_bias: Tensor[(32), float32], %features.7.conv.0.0_weight: Tensor[(192, 32, 1, 1), float32], %features.7.conv.0.0_bias: Tensor[(192), float32], %features.7.conv.1.0_weight: Tensor[(192, 1, 3, 3), float32], %features.7.conv.1.0_bias: Tensor[(192), float32], %features.7.conv.2_weight: Tensor[(64, 192, 1, 1), float32], %features.7.conv.2_bias: Tensor[(64), float32], %features.8.conv.0.0_weight: Tensor[(384, 64, 1, 1), float32], %features.8.conv.0.0_bias: Tensor[(384), float32], %features.8.conv.1.0_weight: Tensor[(384, 1, 3, 3), float32], %features.8.conv.1.0_bias: Tensor[(384), float32], %features.8.conv.2_weight: Tensor[(64, 384, 1, 1), float32], %features.8.conv.2_bias: Tensor[(64), float32], %features.9.conv.0.0_weight: Tensor[(384, 64, 1, 1), float32], %features.9.conv.0.0_bias: Tensor[(384), float32], %features.9.conv.1.0_weight: Tensor[(384, 1, 3, 3), float32], %features.9.conv.1.0_bias: Tensor[(384), float32], %features.9.conv.2_weight: Tensor[(64, 384, 1, 1), float32], %features.9.conv.2_bias: Tensor[(64), float32], %features.10.conv.0.0_weight: Tensor[(384, 64, 1, 1), float32], %features.10.conv.0.0_bias: Tensor[(384), float32], %features.10.conv.1.0_weight: Tensor[(384, 1, 3, 3), float32], %features.10.conv.1.0_bias: Tensor[(384), float32], %features.10.conv.2_weight: Tensor[(64, 384, 1, 1), float32], %features.10.conv.2_bias: Tensor[(64), float32], %features.11.conv.0.0_weight: Tensor[(384, 64, 1, 1), float32], %features.11.conv.0.0_bias: Tensor[(384), float32], %features.11.conv.1.0_weight: Tensor[(384, 1, 3, 3), float32], %features.11.conv.1.0_bias: Tensor[(384), float32], %features.11.conv.2_weight: Tensor[(96, 384, 1, 1), float32], %features.11.conv.2_bias: Tensor[(96), float32], %features.12.conv.0.0_weight: Tensor[(576, 96, 1, 1), float32], %features.12.conv.0.0_bias: Tensor[(576), float32], %features.12.conv.1.0_weight: Tensor[(576, 1, 3, 3), float32], %features.12.conv.1.0_bias: Tensor[(576), float32], %features.12.conv.2_weight: 
Tensor[(96, 576, 1, 1), float32], %features.12.conv.2_bias: Tensor[(96), float32], %features.13.conv.0.0_weight: Tensor[(576, 96, 1, 1), float32], %features.13.conv.0.0_bias: Tensor[(576), float32], %features.13.conv.1.0_weight: Tensor[(576, 1, 3, 3), float32], %features.13.conv.1.0_bias: Tensor[(576), float32], %features.13.conv.2_weight: Tensor[(96, 576, 1, 1), float32], %features.13.conv.2_bias: Tensor[(96), float32], %features.14.conv.0.0_weight: Tensor[(576, 96, 1, 1), float32], %features.14.conv.0.0_bias: Tensor[(576), float32], %features.14.conv.1.0_weight: Tensor[(576, 1, 3, 3), float32], %features.14.conv.1.0_bias: Tensor[(576), float32], %features.14.conv.2_weight: Tensor[(160, 576, 1, 1), float32], %features.14.conv.2_bias: Tensor[(160), float32], %features.15.conv.0.0_weight: Tensor[(960, 160, 1, 1), float32], %features.15.conv.0.0_bias: Tensor[(960), float32], %features.15.conv.1.0_weight: Tensor[(960, 1, 3, 3), float32], %features.15.conv.1.0_bias: Tensor[(960), float32], %features.15.conv.2_weight: Tensor[(160, 960, 1, 1), float32], %features.15.conv.2_bias: Tensor[(160), float32], %features.16.conv.0.0_weight: Tensor[(960, 160, 1, 1), float32], %features.16.conv.0.0_bias: Tensor[(960), float32], %features.16.conv.1.0_weight: Tensor[(960, 1, 3, 3), float32], %features.16.conv.1.0_bias: Tensor[(960), float32], %features.16.conv.2_weight: Tensor[(160, 960, 1, 1), float32], %features.16.conv.2_bias: Tensor[(160), float32], %features.17.conv.0.0_weight: Tensor[(960, 160, 1, 1), float32], %features.17.conv.0.0_bias: Tensor[(960), float32], %features.17.conv.1.0_weight: Tensor[(960, 1, 3, 3), float32], %features.17.conv.1.0_bias: Tensor[(960), float32], %features.17.conv.2_weight: Tensor[(320, 960, 1, 1), float32], %features.17.conv.2_bias: Tensor[(320), float32], %features.18.0_weight: Tensor[(1280, 320, 1, 1), float32], %features.18.0_bias: Tensor[(1280), float32], %classifier.1._packed_params_weight: Tensor[(1000, 1280), float32], 
%classifier.1._packed_params_bias: Tensor[(1000), float32]) {
  %0 = qnn.quantize(%input, 0.0359743f, 54, out_dtype="uint8", axis=1) /* C.graph: aten::quantize_per_tensor, jit._trace.TopLevelTracedModule: __module.quant */;
  %1 = nn.pad(%0, 54f, pad_width=[[0, 0], [0, 0], [1, 1], [1, 1]]) /* C.graph: quantized::conv2d_relu, jit._trace.TopLevelTracedModule: __module.features/__module.features.0/__module.features.0.0_PART_0 */;
  %2 = qnn.quantize(%features.0.0_weight, meta[relay.Constant][0], 0, out_dtype="int8", axis=0) /* C.graph: quantized::conv2d_relu, jit._trace.TopLevelTracedModule: __module.features/__module.features.0/__module.features.0.0_PART_1 */;
  %3 = qnn.conv2d(%1, %2, 54, 0, 0.0359743f, meta[relay.Constant][0], strides=[2, 2], padding=[0, 0, 0, 0], channels=32, kernel_size=[3, 3], out_dtype="int32") /* C.graph: quantized::conv2d_relu, jit._trace.TopLevelTracedModule: __module.features/__module.features.0/__module.features.0.0_PART_2 */;
  %4 = qnn.quantize(%features.0.0_bias, meta[relay.Constant][1], 0, out_dtype="int32", axis=0) /* C.graph: quantized::conv2d_relu, jit._trace.TopLevelTracedModule: __module.features/__module.features.0/__module.features.0.0_PART_3 */;
```

It looks like the IR inherits a lot of source information from the Torch JIT trace, which is not relevant to TVM itself. Is there a quick way to remove all these comments?





---
[Visit Topic](https://discuss.tvm.apache.org/t/relay-transform-to-remove-comments/12096/1) to respond.
