Posted to discuss-archive@tvm.apache.org by Wu Zheng via Apache TVM Discuss <no...@discuss.tvm.ai> on 2021/08/16 03:46:49 UTC

[Apache TVM Discuss] [Questions] I want to pad my matrix but TVM wipes the padding when lowering


I wrote a simple matrix multiplication computation; here is my code:

    import tvm
    from tvm import te, topi

    A = te.placeholder((49, 576))
    B = te.placeholder((64, 576))
    # Pad A from (49, 576) to (64, 576) by appending 15 rows of zeros.
    D = topi.nn.pad(A, (0, 0), (15, 0))
    k = te.reduce_axis((0, 576))
    C = te.compute(
        (64, 64),
        lambda i, j: te.sum(
            D(i, k) * B(j, k),
            axis=[k]
        )
    )
    # Crop the result back from (64, 64) to (49, 64).
    E = topi.nn.pad(C, (0, 0), (-15, 0))

    s = te.create_schedule(E.op)
    code = tvm.lower(s, [A, B, E], simple_mode=True)
    print(code)
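
For clarity, topi.nn.pad takes pad_before and pad_after tuples, so D should come out as (64, 576) (A with 15 extra rows) and E as (49, 64) (C with those rows cropped off again, assuming the negative pad_after is accepted as a crop, which the TIR below suggests it is). A quick way to check the intended shapes:

    # Sanity check of the intended shapes (illustrative only).
    print(D.shape)  # expected [64, 576]
    print(E.shape)  # expected [49, 64]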

I want D to stay padded to (64, 576) so that I can tensorize the matrix multiplication with (16, 16) tiles. But the TIR I get looks like this:

    primfn(placeholder_2: handle, placeholder_3: handle, PadInput_1: handle) -> ()
      attr = {"global_symbol": "main", "tir.noalias": True}
      buffers = {PadInput: Buffer(PadInput_2: Pointer(float32), float32, [49, 64], []),
                 placeholder: Buffer(placeholder_4: Pointer(float32), float32, [49, 576], []),
                 placeholder_1: Buffer(placeholder_5: Pointer(float32), float32, [64, 576], [])}
      buffer_map = {placeholder_2: placeholder, placeholder_3: placeholder_1, PadInput_1: PadInput} {
      attr [PadInput_3: Pointer(float32)] "storage_scope" = "global";
      allocate(PadInput_3, float32, [28224]);
      attr [compute: Pointer(float32)] "storage_scope" = "global";
      allocate(compute, float32, [3136]) {
        for (i0: int32, 0, 49) {
          for (i1: int32, 0, 576) {
            PadInput_3[((i0*576) + i1)] = (float32*)placeholder_4[((i0*576) + i1)]
          }
        }
        for (i: int32, 0, 49) {
          for (j: int32, 0, 64) {
            compute[((i*64) + j)] = 0f32
            for (rv: int32, 0, 576) {
              compute[((i*64) + j)] = ((float32*)compute[((i*64) + j)] + ((float32*)PadInput_3[((i*576) + rv)]*(float32*)placeholder_5[((j*576) + rv)]))
            }
          }
        }
        for (i0_1: int32, 0, 49) {
          for (i1_1: int32, 0, 64) {
            PadInput_2[((i0_1*64) + i1_1)] = (float32*)compute[((i0_1*64) + i1_1)]
          }
        }
      }
    }

It looks like the padding I added was eliminated because TVM considers it useless, but it is important for my scheduling. Can anybody tell me whether there is an option to turn off this elimination? Thanks a lot!
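
For reference, my guess is that this comes from bound inference: since the final crop E only reads the first 49 rows of C, TVM shrinks the computed region of C to 49 rows, so the padded rows of D are never needed and the pad condition is simplified away. A minimal sketch that seems to confirm this (same A, B, D, k, C as above, just without the trailing crop) keeps the padding:

    # Lower C directly instead of the cropped E (illustrative sketch).
    s2 = te.create_schedule(C.op)
    print(tvm.lower(s2, [A, B, C], simple_mode=True))
    # Here the pad stage keeps its guard, something like
    #   PadInput[...] = @tir.if_then_else((i0 < 49), placeholder[...], 0f32, ...)
    # and the full (64, 576) padded buffer is materialized.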





---
[Visit Topic](https://discuss.tvm.apache.org/t/i-want-to-pad-my-matrix-but-tvm-wipe-the-paddings-when-lowering/10799/1) to respond.
