Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/04/07 23:00:32 UTC

[GitHub] [incubator-tvm] masahi edited a comment on issue #5261: [RELAY][BYOC] Add support for composite functions in BYOC

URL: https://github.com/apache/incubator-tvm/pull/5261#issuecomment-610659753
 
 
   Using MergeComposite, AnnotateTarget, and PartitionGraph, I get the following graph for a conv + bias + relu pattern:
   
   ```
   def @dnnl_0(%dnnl_0_i0: Tensor[(1, 3, 224, 224), float32], Inline=1, Compiler="dnnl", global_symbol=runtime.String(0x55a8d9cddbd0), Primitive=1) -> Tensor[(1, 1, 224, 224), float32] {
     %2 = fn (%data: Tensor[(1, 3, 224, 224), float32], %weight: Tensor[(1, 3, 3, 3), float32], %bias: Tensor[(1, 1, 1), float32], Composite="dnnl.conv_bias_relu") -> Tensor[(1, 1, 224, 224), float32] {
       %0 = nn.conv2d(%data, %weight, padding=[1, 1, 1, 1], channels=1, kernel_size=[3, 3]) /* ty=Tensor[(1, 1, 224, 224), float32] */;
       %1 = add(%0, %bias) /* ty=Tensor[(1, 1, 224, 224), float32] */;
       nn.relu(%1) /* ty=Tensor[(1, 1, 224, 224), float32] */
     };
     %2(%dnnl_0_i0, meta[relay.Constant][0] /* ty=Tensor[(1, 3, 3, 3), float32] */, meta[relay.Constant][1] /* ty=Tensor[(1, 1, 1), float32] */) /* ty=Tensor[(1, 1, 224, 224), float32] */
   }
   
   def @main(%data1: Tensor[(1, 3, 224, 224), float32]) -> Tensor[(1, 1, 224, 224), float32] {
     @dnnl_0(%data1) /* ty=Tensor[(1, 1, 224, 224), float32] */
   }
   ```
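   
   For context, the module above comes out of a pipeline roughly like the sketch below. This is a minimal repro, not code from this PR: the constants and shapes just mirror the IR above, and the pattern-table format (Relay expressions whose free vars act as wildcards) may differ between TVM versions.
   
   ```python
   import numpy as np
   import tvm
   from tvm import relay
   
   def conv_bias_relu_pattern():
       # Free vars act as wildcards when MergeComposite matches the graph.
       data = relay.var("data")
       weight = relay.var("weight")
       bias = relay.var("bias")
       conv = relay.nn.conv2d(data, weight)
       return relay.nn.relu(relay.add(conv, bias))
   
   # conv2d -> add -> relu with constant weight/bias, as in the IR above.
   data = relay.var("data", shape=(1, 3, 224, 224))
   weight = relay.const(np.zeros((1, 3, 3, 3), dtype="float32"))
   bias = relay.const(np.zeros((1, 1, 1), dtype="float32"))
   conv = relay.nn.conv2d(data, weight, padding=(1, 1), channels=1, kernel_size=(3, 3))
   mod = tvm.IRModule.from_expr(relay.nn.relu(relay.add(conv, bias)))
   
   pattern_table = [("dnnl.conv_bias_relu", conv_bias_relu_pattern())]
   mod = relay.transform.MergeComposite(pattern_table)(mod)
   mod = relay.transform.AnnotateTarget("dnnl")(mod)
   mod = relay.transform.PartitionGraph()(mod)
   print(mod)
   ```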
   
   Is it possible to inline the composite function `%2` there into `@dnnl_0`? What I want is this:
   ```
   def @dnnl_0(%dnnl_0_i0: Tensor[(1, 3, 224, 224), float32], Inline=1, Compiler="dnnl", global_symbol=runtime.String(0x5599b307c370), Primitive=1) -> Tensor[(1, 1, 224, 224), float32] {
     %0 = nn.conv2d(%dnnl_0_i0, meta[relay.Constant][0] /* ty=Tensor[(1, 3, 3, 3), float32] */, padding=[1, 1, 1, 1], channels=1, kernel_size=[3, 3]) /* ty=Tensor[(1, 1, 224, 224), float32] */;
     %1 = add(%0, meta[relay.Constant][1] /* ty=Tensor[(1, 1, 1), float32] */) /* ty=Tensor[(1, 1, 224, 224), float32] */;
     nn.relu(%1) /* ty=Tensor[(1, 1, 224, 224), float32] */
   }
   
   def @main(%data: Tensor[(1, 3, 224, 224), float32]) -> Tensor[(1, 1, 224, 224), float32] {
     @dnnl_0(%data) /* ty=Tensor[(1, 1, 224, 224), float32] */
   }
   ```
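   
   A naive way to get that shape would be to beta-reduce the composite call with an `ExprMutator`, something like the untested sketch below (`InlineComposites` is a name I made up, and the attrs lookup may vary slightly by TVM version):
   
   ```python
   from tvm import relay
   from tvm.relay.expr_functor import ExprMutator
   
   class InlineComposites(ExprMutator):
       """Splice the body of each composite callee into its caller."""
       def visit_call(self, call):
           call = super().visit_call(call)
           op = call.op
           # Composite functions appear as direct calls to relay.Function
           # nodes that carry a "Composite" attribute.
           if isinstance(op, relay.Function) and op.attrs and "Composite" in op.attrs:
               # Substitute the call arguments for the function parameters.
               return relay.bind(op.body, dict(zip(op.params, call.args)))
           return call
   ```
   
   Running `InlineComposites().visit(mod["dnnl_0"])` should produce the flattened body above.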
   
   Otherwise I have to support function calls in the DNNL codegen. @zhiics @mbaret 
   
   UPDATE: hmm, if I inline the composite function, I lose the `Composite` attribute and hence cannot detect the fused call. Is supporting function calls in the DNNL codegen a better option?
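   
   To make that second option concrete: the codegen visitor would only need to recognize calls whose op is a function tagged with `Composite`, and emit a single fused DNNL primitive for the whole pattern. A rough Python illustration of the check (the actual DNNL codegen lives in C++; the class name here is made up):
   
   ```python
   from tvm import relay
   from tvm.relay.expr_functor import ExprVisitor
   
   class CompositeCallFinder(ExprVisitor):
       """Collect calls to composite functions, keyed by pattern name."""
       def __init__(self):
           super().__init__()
           self.fused = []
   
       def visit_call(self, call):
           op = call.op
           if isinstance(op, relay.Function) and op.attrs and "Composite" in op.attrs:
               self.fused.append((str(op.attrs["Composite"]), call))
           super().visit_call(call)
   ```
   
   For the module above, `CompositeCallFinder().visit(mod["dnnl_0"])` would report one `dnnl.conv_bias_relu` call, which the codegen could map to a fused conv + bias + relu kernel.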
