Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2022/07/04 03:51:53 UTC

[GitHub] [tvm] yangulei commented on pull request #11966: [DNNL][BYOC] Enable Altering Dense Weight Layout

yangulei commented on PR #11966:
URL: https://github.com/apache/tvm/pull/11966#issuecomment-1173309231

   Hi @apeskov, what you mentioned is a common issue with blocked layouts.
   > Let's take a closer look at the next example. Dense with the following shapes: data_shape `[128, 10]`, weight_shape `[17, 10]`, weight_layout `NC`; the output_shape will be `[128, 17]`. Assume that we apply alter op layout and change the layout to `NC8c`. The weight shape will be changed to `[3, 10, 8]`, with some additional padding. Unexpectedly, the output shape will also be changed to `[128, 24]`. A weight layout conversion changes the output shape, which is very strange behavior.
   
   If additional `padding` is applied when transforming from a plain layout to a blocked layout, a corresponding `cropping` must be applied when transforming back to the plain layout. A bijective transformation should ensure `origin = backward(forward(origin))`, but this is not guaranteed so far. A minimal NumPy sketch of the round trip (illustration only, not TVM's actual AlterOpLayout machinery) is below; note that the backward direction cannot crop without being told the logical `N`:
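   
   ```python
   # Sketch reproducing the shapes from the example above: NC -> NC8c pads
   # N from 17 up to 24 (a multiple of 8), and the inverse transform must
   # crop back to 17 using extra information not present in the blocked tensor.
   import numpy as np
   
   BLOCK = 8
   
   def forward(w_nc):
       """NC -> NC8c: pad N up to a multiple of BLOCK, then block it."""
       n, c = w_nc.shape
       n_pad = (BLOCK - n % BLOCK) % BLOCK
       w = np.pad(w_nc, ((0, n_pad), (0, 0)))           # [17, 10] -> [24, 10]
       return w.reshape(-1, BLOCK, c).transpose(0, 2, 1)  # -> [3, 10, 8]
   
   def backward(w_blocked, logical_n):
       """NC8c -> NC: unblock, then crop using the logical shape."""
       c = w_blocked.shape[1]
       w = w_blocked.transpose(0, 2, 1).reshape(-1, c)  # [3, 10, 8] -> [24, 10]
       return w[:logical_n]                             # crop -> [17, 10]
   
   origin = np.random.rand(17, 10)
   assert forward(origin).shape == (3, 10, 8)
   # Bijective only because backward() receives the logical N as extra input.
   assert np.array_equal(origin, backward(forward(origin), origin.shape[0]))
   ```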
   
   Padding is natural, while `cropping` needs `extra information`. We use the extra information from the definition of `Conv` to solve this problem for the blocked weights, but that is a workaround rather than a general solution. I think we need both a `logical shape` and a `concrete shape` for each tensor, just like the `dims` and `padded_dims` in a DNNL memory descriptor. A sketch of what such a dual-shape descriptor could look like is shown below.
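   
   ```python
   # Hypothetical descriptor carrying both shapes, loosely mirroring `dims`
   # vs `padded_dims` in a DNNL memory descriptor. The names are
   # illustrative only, not an existing TVM or DNNL API.
   from dataclasses import dataclass
   from typing import Tuple
   
   @dataclass
   class TensorDesc:
       logical_shape: Tuple[int, ...]   # e.g. (17, 10), like DNNL `dims`
       concrete_shape: Tuple[int, ...]  # e.g. (24, 10), like DNNL `padded_dims`
   
       def crop_needed(self) -> bool:
           # Cropping is required whenever padding made the shapes diverge.
           return self.logical_shape != self.concrete_shape
   
   weight = TensorDesc(logical_shape=(17, 10), concrete_shape=(24, 10))
   assert weight.crop_needed()
   ```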
   
   Maybe we need a pass to infer the original logical shapes and save them as attributes for later use. Do you have any ideas about this? Roughly, the recording half of such a pass might look like the sketch below (a hedged sketch using standard Relay visitors; how the recorded shapes would be attached as attributes and consumed by a later layout pass is left open):
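   
   ```python
   # Walk a type-inferred Relay function and record each call's
   # pre-transform (logical) output shape, so a later layout pass could
   # consult it when cropping. The visitor API here is standard Relay;
   # the attribute-attachment mechanism is deliberately not shown.
   from tvm import relay
   from tvm.relay.expr_functor import ExprVisitor
   
   class LogicalShapeRecorder(ExprVisitor):
       def __init__(self):
           super().__init__()
           self.shapes = {}  # call node -> logical output shape
   
       def visit_call(self, call):
           if isinstance(call.checked_type, relay.TensorType):
               self.shapes[call] = tuple(int(d) for d in call.checked_type.shape)
           super().visit_call(call)
   
   def record_logical_shapes(mod):
       # Run type inference first so checked_type is populated.
       mod = relay.transform.InferType()(mod)
       recorder = LogicalShapeRecorder()
       recorder.visit(mod["main"])
       return recorder.shapes
   ```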
   
   

