Posted to dev@tvm.apache.org by Sasha Rush via Apache TVM Discuss <no...@discuss.tvm.ai> on 2021/02/10 23:37:59 UTC

[Apache TVM Discuss] [Development/RFC] Thoughts on a Simpler Scheduling Language


Hi all, 

I find that programming in TVM results in an extremely large number of non-scoped variables. The main problem is that axes and tensors are not grouped, and simple mistakes result in extremely verbose low-level errors. 90% of my mistakes come from not keeping tensors and axes grouped together.

I'm curious what people think of a less low-level scheduling language. I generally write my code in the style below, which is much less verbose, fixes double splitting, and prevents errors from mixing up which axis belongs to which tensor.

```python
ll, nn = s.axes(C)
reduce_axis = s.reduce_axis(C)
ll = ll.split(TPB)
nn = nn.split(TPB)
mm = reduce_axis.split(TPB)
s.reorder(C, (ll.outer, nn.outer, ll.inner, nn.inner, mm.outer, mm.inner))

# Bind blocks and threads to C
ll.outer.bind(te.thread_axis("blockIdx.x"))
nn.outer.bind(te.thread_axis("blockIdx.y"))
ll.inner.bind(tx)
nn.inner.bind(ty)

# Set up caching
ll_A, mm_A = s.axes(AA)
ll_A = ll_A.split(TPB)
mm_A = mm_A.split(TPB)
s.reorder(AA, (ll_A.outer, mm_A.outer, ll_A.inner, mm_A.inner))
mm.outer.compute_at(AA)
ll_A.inner.bind(tx)
mm_A.inner.bind(ty)
```

Do people have any other tricks? Ideally there would be a really nice way to group together the splitting of two tensors in the same way (in this case ll_A mirrors ll, so why are they separate?)
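To make the grouping idea concrete, here is a minimal pure-Python sketch (no TVM dependency; every class and name here is hypothetical, not an existing TVM API) of the kind of axis wrapper the snippet above assumes, where `split` returns a single object carrying its own `outer`/`inner` halves instead of two loose variables:

```python
class SplitAxis:
    """Result of a split: keeps the outer/inner pair grouped together."""
    def __init__(self, name, factor):
        self.outer = f"{name}.outer"
        self.inner = f"{name}.inner<{factor}"


class Axis:
    """An axis that remembers which tensor it belongs to."""
    def __init__(self, tensor, name):
        self.tensor = tensor
        self.name = name

    def split(self, factor):
        # Returns one grouped object rather than separate outer/inner vars,
        # so the pair can never be mixed up across tensors.
        return SplitAxis(f"{self.tensor}.{self.name}", factor)


# Usage mirroring the snippet above (TPB = 32, say):
ll = Axis("C", "ll").split(32)
print(ll.outer)  # -> C.ll.outer
print(ll.inner)  # -> C.ll.inner<32
```

Because the split result knows its tensor and carries both halves, a `reorder` or `bind` call can validate that every axis it receives belongs to the tensor being scheduled, turning the verbose low-level errors into early, scoped ones.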





---
[Visit Topic](https://discuss.tvm.apache.org/t/thoughts-on-a-simpler-scheduling-language/9110/1) to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click here](https://discuss.tvm.apache.org/email/unsubscribe/235a0b7fd9bbdb03428d737ba7c75bcbe13467ae222e99ea13e259cec7c9efa1).

[Apache TVM Discuss] [Development/RFC] Thoughts on a Simpler Scheduling Language

Posted by Sasha Rush via Apache TVM Discuss <no...@discuss.tvm.ai>.

What an amazing answer! Thank you so much for your time and thoughtfulness.





---
[Visit Topic](https://discuss.tvm.apache.org/t/thoughts-on-a-simpler-scheduling-language/9110/3) to respond.


[Apache TVM Discuss] [Development/RFC] Thoughts on a Simpler Scheduling Language

Posted by Junru Shao via Apache TVM Discuss <no...@discuss.tvm.ai>.

Hey @srush,

Thanks for asking!

We are actively developing a more straightforward scheduling language and a new IR called TensorIR:

* Imperative scheduling: each schedule primitive is like a compiler pass that transforms the TensorIR into a new TensorIR - you can see and debug the scheduling result immediately after each step.

* Python-first: each step produces a new TensorIR, which can be printed in Python syntax and parsed back into schedule state. The TensorIR syntax is designed to be completely human-readable and manipulable.

* Tensorization: we extend our tensorization capability, which opens the possibility of competitive GEMM performance.

* For your particular case, we provide a primitive called `reverse_compute_at`, which computes the consumer under a specific loop of the producer. The shape of the computed region is handled automatically by the schedule, so you don't have to repeat the splitting.
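As a toy illustration of the imperative, python-first idea (pure Python, no TVM; the class and primitive names here are made up, not the real TensorIR API): each primitive rewrites a printable program state, so you can inspect the result after every step.

```python
class ToySchedule:
    """Toy model: the 'IR' is just an ordered list of loop names."""
    def __init__(self, loops):
        self.loops = list(loops)

    def split(self, name, factor):
        # Each primitive is a pass: it rewrites the state in place,
        # replacing one loop with an outer/inner pair.
        i = self.loops.index(name)
        self.loops[i:i + 1] = [f"{name}.outer", f"{name}.inner<{factor}"]
        return self

    def show(self):
        # The state is printable at any point, mimicking "python-first".
        return " / ".join(self.loops)


s = ToySchedule(["i", "j", "k"])
print(s.show())          # -> i / j / k
s.split("j", 4)
print(s.show())          # -> i / j.outer / j.inner<4 / k
```

The point is the debugging loop: after every primitive you hold a complete, printable program, rather than an opaque schedule object that only materializes at lowering time.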

RFC: https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872.

We are actively preparing to upstream our codebase, and will keep the community closely updated on our latest status :slight_smile:





---
[Visit Topic](https://discuss.tvm.apache.org/t/thoughts-on-a-simpler-scheduling-language/9110/2) to respond.
