Posted to discuss-archive@tvm.apache.org by kwmaeng via Apache TVM Discuss <no...@discuss.tvm.ai> on 2020/10/24 21:58:28 UTC

[Apache TVM Discuss] [Questions] Is sparse kernel supported in the tensor expression language?


I am wondering if there is a way to generate sparse linear algebra code using the TVM tensor expression language.
It seems like relay.nn has sparse-related APIs such as sparse_dense, but tvm.te does not.
If I cannot generate a sparse kernel using te, is there a way to use relay to manually generate a scheduled sparse kernel (e.g., a tiled one)?
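For reference, this is roughly what I mean on the relay side. A minimal sketch, assuming the weight can be passed to relay.nn.sparse_dense as the (data, indices, indptr) triple of a scipy BSR matrix (the shapes and sparsity here are just illustrative):

```python
import numpy as np
import scipy.sparse as sp
import tvm
from tvm import relay

M, K, N = 8, 16, 32
x = relay.var("x", shape=(M, K), dtype="float32")

# Random weight, ~80% zeros, stored in BSR format.
w_np = np.random.rand(N, K).astype("float32")
w_np[w_np < 0.8] = 0.0
w_bsr = sp.bsr_matrix(w_np, blocksize=(4, 1))

# The sparse weight goes in as three constant tensors.
w_data = relay.const(w_bsr.data)
w_indices = relay.const(w_bsr.indices.astype("int32"))
w_indptr = relay.const(w_bsr.indptr.astype("int32"))

y = relay.nn.sparse_dense(x, (w_data, w_indices, w_indptr))
mod = tvm.IRModule.from_expr(relay.Function([x], y))
lib = relay.build(mod, target="llvm")
```

But this goes through relay's own schedules, and I don't see how to write or tile such a kernel directly in te.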

Thank you!





---
[Visit Topic](https://discuss.tvm.apache.org/t/is-sparse-kernel-supported-in-the-tensor-expression-language/8274/1) to respond.


[Apache TVM Discuss] [Questions] Is sparse kernel supported in the tensor expression language?

Posted by Junru Shao via Apache TVM Discuss <no...@discuss.tvm.ai>.

CC: @tkonolige, who is great at sparse kernels.





---
[Visit Topic](https://discuss.tvm.apache.org/t/is-sparse-kernel-supported-in-the-tensor-expression-language/8274/2) to respond.


[Apache TVM Discuss] [Questions] Is sparse kernel supported in the tensor expression language?

Posted by Tristan Konolige via Apache TVM Discuss <no...@discuss.tvm.ai>.

@kwmaeng I've written the sparse_dense kernel for GPUs. It was a bit of an arduous process, but here are my takeaways:

- Using te only works for some sparse kernels. Sparse kernels are often written as functions over the input tensor, but te requires you to write your kernel as a function over indices in the output tensor. If you can restructure your kernel to be output-driven, you can get around this issue; otherwise you're out of luck (see the gather/scatter sketch after this list).
- Even if you can write your kernel in te, you may not be able to schedule it. Currently, you cannot apply any scheduling to tensors that are used to control the bounds of a loop. Also, if you have a statement of the form `x[y[i]]` (`x` and `y` are tensors, `i` is an index), you cannot apply any transformations to pull `y[i]` out of `x` (typically for caching).
- Given the above two points, you'll probably have to write your kernel in TIR. Currently the best way to do this is `IRBuilder` (tvmscript is still a work in progress and I didn't have as much luck using it). You can see the sparse_dense kernel I wrote [here](https://github.com/apache/incubator-tvm/blob/main/python/tvm/topi/cuda/sparse.py#L158) for an example of how to use `IRBuilder`; a rough standalone sketch follows below.
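To make the first point concrete, here is a minimal sketch (names and shapes are purely illustrative): a gather is expressible in te because each output element is a closed-form function of its own index, while a scatter is not.

```python
import tvm
from tvm import te

n, m = 8, 16
x = te.placeholder((m,), name="x", dtype="float32")
idx = te.placeholder((n,), name="idx", dtype="int32")

# Output-driven: gather[i] is a pure function of the output index i,
# so te.compute can express it (indexing x with a tensor value is fine).
gather = te.compute((n,), lambda i: x[idx[i]], name="gather")
s = te.create_schedule(gather.op)
f = tvm.build(s, [x, idx, gather], target="llvm")

# Input-driven: a scatter, out[idx[i]] = x[i], has no closed-form
# expression for an arbitrary output position, so it cannot be written
# as a te.compute at all.
```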

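And for the third point, a rough `IRBuilder` sketch. This is not the topi kernel linked above, just an illustrative CSR matrix-vector multiply; the function names and the `te.extern` wrapping are my own choices here:

```python
import tvm
from tvm import te

def csrmv(data, indices, indptr, vec, rows):
    """out[row] = sum of data[j] * vec[indices[j]] over the row's slice of nonzeros."""
    def gen_ir(data_buf, indices_buf, indptr_buf, vec_buf, out_buf):
        ib = tvm.tir.ir_builder.create()
        data_ptr = ib.buffer_ptr(data_buf)
        indices_ptr = ib.buffer_ptr(indices_buf)
        indptr_ptr = ib.buffer_ptr(indptr_buf)
        vec_ptr = ib.buffer_ptr(vec_buf)
        out_ptr = ib.buffer_ptr(out_buf)
        with ib.for_range(0, rows, name="row") as row:
            out_ptr[row] = tvm.tir.const(0.0, "float32")
            row_start = indptr_ptr[row]
            # The inner loop extent depends on the data: exactly what te.compute can't express.
            with ib.for_range(0, indptr_ptr[row + 1] - row_start, name="j") as j:
                elem = row_start + j
                out_ptr[row] += data_ptr[elem] * vec_ptr[indices_ptr[elem]]
        return ib.get()

    return te.extern(
        (rows,),
        [data, indices, indptr, vec],
        lambda ins, outs: gen_ir(ins[0], ins[1], ins[2], ins[3], outs[0]),
        dtype="float32",
        name="csrmv",
    )
```

You schedule and build the `te.extern` result like any other op, e.g. `s = te.create_schedule(out.op)` followed by `tvm.build(s, [data, indices, indptr, vec, out], "llvm")`.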




---
[Visit Topic](https://discuss.tvm.apache.org/t/is-sparse-kernel-supported-in-the-tensor-expression-language/8274/3) to respond.
