Posted to discuss-archive@tvm.apache.org by Puddingfjz via TVM Discuss <no...@discuss.tvm.ai> on 2020/04/12 18:55:50 UTC

[TVM Discuss] [Questions] Explain the tiling reduction axes part in template


Can anyone help explain the "tile reduction axes" part of the [Tuning High Performance Convolution on NVIDIA GPUs](https://docs.tvm.ai/tutorials/autotvm/tune_conv2d_cuda.html#tuning-high-performance-convolution-on-nvidia-gpus) tutorial?
The code for this part is:
```
# tile reduction axes
n, f, y, x = s[OL].op.axis          # spatial axes of the cached output stage OL
rc, ry, rx = s[OL].op.reduce_axis   # reduction axes: input channel, kernel y, kernel x
rco, rcm, rci = cfg['tile_rc'].apply(s, OL, rc)
ryo, rym, ryi = cfg['tile_ry'].apply(s, OL, ry)
rxo, rxm, rxi = cfg['tile_rx'].apply(s, OL, rx)
s[OL].reorder(rco, ryo, rxo, rcm, rym, rxm, rci, ryi, rxi, n, f, y, x)

s[AA].compute_at(s[OL], rxo)   # fill the shared-memory tiles once per outer reduction step
s[WW].compute_at(s[OL], rxo)
s[AL].compute_at(s[OL], rxm)   # fill the register tiles once per middle reduction step
s[WL].compute_at(s[OL], rxm)
```
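
For context, ```OL```, ```AA```, ```WW```, ```AL``` and ```WL``` are created a few lines earlier in the same tutorial, roughly like this (copied here from memory so the snippet above is self-contained):
```
OL = s.cache_write(conv, 'local')          # per-thread accumulator held in registers
AA = s.cache_read(data, 'shared', [OL])    # input tile staged in shared memory
WW = s.cache_read(kernel, 'shared', [OL])  # weight tile staged in shared memory
AL = s.cache_read(AA, 'local', [OL])       # per-thread slice of AA in registers
WL = s.cache_read(WW, 'local', [OL])       # per-thread slice of WW in registers
```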

1. Why do we need to tile the reduction axes (and what are reduction axes in the first place)? The first sketch after these questions shows a minimal example of what I mean.

2. Are ```n, f, y, x``` here the same as the ```n, f, y, x``` in
```
##### space definition begin #####
n, f, y, x = s[conv].op.axis
rc, ry, rx = s[conv].op.reduce_axis
```
If not, what is the relationship between them?
 
3. Why do we need the ```compute_at``` calls here? I think ```AA``` is just read from global memory, so it does not need to be computed, right? The second sketch below shows the ```cache_read``` pattern I am asking about.
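
For question 1, here is a minimal standalone sketch (my own toy matmul, not the tutorial code) of what I understand a reduction axis and its tiling to be:
```
import tvm
from tvm import te

# k is a reduction axis: it is summed over and does not appear in the output.
n = 1024
A = te.placeholder((n, n), name='A')
B = te.placeholder((n, n), name='B')
k = te.reduce_axis((0, n), name='k')
C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name='C')

s = te.create_schedule(C.op)
i, j = s[C].op.axis                  # spatial axes of the output
kk, = s[C].op.reduce_axis            # the reduction axis k
ko, ki = s[C].split(kk, factor=16)   # tile the reduction axis
s[C].reorder(ko, i, j, ki)
print(tvm.lower(s, [A, B, C], simple_mode=True))
```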
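
And for question 3, this is the kind of ```cache_read``` + ```compute_at``` pattern I am asking about (again my own toy 1-D sketch, not the tutorial code):
```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name='A')
B = te.compute((n,), lambda i: A[i] + 1.0, name='B')

s = te.create_schedule(B.op)
# cache_read inserts a new stage AA that copies A into a 'shared'-scope
# buffer; the copy itself is computation, so it must be scheduled.
AA = s.cache_read(A, 'shared', [B])

i, = s[B].op.axis
io, ii = s[B].split(i, factor=64)
# compute_at nests the copy stage under B's outer loop, so each io
# iteration loads only the 64-element slice of A it consumes.
s[AA].compute_at(s[B], io)
print(tvm.lower(s, [A, B], simple_mode=True))
```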

---
[Visit Topic](https://discuss.tvm.ai/t/explain-the-tiling-reduction-axes-part-in-template/6336/1) to respond.
