Posted to commits@tvm.apache.org by GitBox <gi...@apache.org> on 2020/01/18 03:58:58 UTC

[GitHub] [incubator-tvm] masahi opened a new issue #4739: FoldScaleAxis + FoldConstant suboptimal on Conv + BN + Relu?

URL: https://github.com/apache/incubator-tvm/issues/4739
 
 
   Hi, when I run the following script I get the IR shown at the bottom, which looks suboptimal to me. I expected to be left with only conv2d + bias_add + relu. Am I missing something? @tqchen @vinx13
   ```
    import tvm
    from tvm import relay
    import tvm.relay.testing  # provides relay.testing.create_workload

    def test_fold_bn():
       def get_layers(prefix, data):
           weight = relay.var(prefix+"weight")
           bn_gamma = relay.var(prefix+"bn_gamma")
           bn_beta = relay.var(prefix+"bn_beta")
           bn_mmean = relay.var(prefix+"bn_mean")
           bn_mvar = relay.var(prefix+"bn_var")
   
           layer = relay.nn.conv2d(data=data, weight=weight,
                                   kernel_size=(3,3), channels=16, padding=(1, 1))
           layer = relay.nn.batch_norm(layer, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]
           layer = relay.nn.relu(layer)
           return layer
   
       data = relay.var("data", relay.TensorType((1, 3, 224, 224), "float32"))
       layer1 = get_layers("layer1_", data)
       last = layer1
       net = relay.Function(relay.analysis.free_vars(last), last)
   
       ishape = (1, 3, 224, 224)
       mod, params = tvm.relay.testing.create_workload(net)
       with relay.build_config(opt_level=3, disabled_pass=["AlterOpLayout"]):
           opt_mod, params = relay.build_module.optimize(mod, "llvm")
           print(opt_mod["main"].astext())

    test_fold_bn()
    ```
   
   ```
   fn (%data: Tensor[(1, 3, 224, 224), float32], %layer1_weight: Tensor[(16, 3, 3, 3), float32], %layer1_bn_gamma: Tensor[(16), float32], %layer1_bn_beta: Tensor[(16), float32], %layer1_bn_mean: Tensor[(16), float32], %layer1_bn_var: Tensor[(16), float32]) -> Tensor[(1, 16, 224, 224), float32] {
     %3 = fn (%p0: Tensor[(16), float32], %p1: Tensor[(16), float32], Primitive=1) -> Tensor[(16), float32] {
       %0 = add(%p0, 1e-05f /* ty=float32 */) /* ty=Tensor[(16), float32] */;
       %1 = sqrt(%0) /* ty=Tensor[(16), float32] */;
       %2 = divide(1f /* ty=float32 */, %1) /* ty=Tensor[(16), float32] */;
       multiply(%2, %p1) /* ty=Tensor[(16), float32] */
     };
     %4 = %3(%layer1_bn_var, %layer1_bn_gamma) /* ty=Tensor[(16), float32] */;
     %8 = fn (%p01: Tensor[(16, 3, 3, 3), float32], %p11: Tensor[(16), float32], Primitive=1) -> Tensor[(16, 3, 3, 3), float32] {
       %5 = expand_dims(%p11, axis=1, num_newaxis=2) /* ty=Tensor[(16, 1, 1), float32] */;
       %6 = squeeze(%5, axis=[1, 2]) /* ty=Tensor[(16), float32] */;
       %7 = expand_dims(%6, axis=1, num_newaxis=3) /* ty=Tensor[(16, 1, 1, 1), float32] */;
       multiply(%p01, %7) /* ty=Tensor[(16, 3, 3, 3), float32] */
     };
     %9 = %8(%layer1_weight, %4) /* ty=Tensor[(16, 3, 3, 3), float32] */;
     %16 = fn (%p02: Tensor[(1, 3, 224, 224), float32], %p12: Tensor[(16, 3, 3, 3), float32], %p2: Tensor[(16), float32], %p3: Tensor[(16), float32], %p4: Tensor[(16), float32], Primitive=1) -> Tensor[(1, 16, 224, 224), float32] {
       %10 = nn.conv2d(%p02, %p12, padding=[1, 1], channels=16, kernel_size=[3, 3]) /* ty=Tensor[(1, 16, 224, 224), float32] */;
       %11 = negative(%p2) /* ty=Tensor[(16), float32] */;
       %12 = multiply(%11, %p3) /* ty=Tensor[(16), float32] */;
       %13 = add(%12, %p4) /* ty=Tensor[(16), float32] */;
       %14 = expand_dims(%13, axis=1, num_newaxis=2) /* ty=Tensor[(16, 1, 1), float32] */;
       %15 = add(%10, %14) /* ty=Tensor[(1, 16, 224, 224), float32] */;
       nn.relu(%15) /* ty=Tensor[(1, 16, 224, 224), float32] */
     };
     %16(%data, %9, %layer1_bn_mean, %4, %layer1_bn_beta) /* ty=Tensor[(1, 16, 224, 224), float32] */
   }
   ```


[GitHub] [incubator-tvm] masahi commented on issue #4739: FoldScaleAxis + FoldConstant suboptimal on Conv + BN + Relu?

URL: https://github.com/apache/incubator-tvm/issues/4739#issuecomment-575864070
 
 
   Ah, I see. Sorry for the noise.


[GitHub] [incubator-tvm] vinx13 commented on issue #4739: FoldScaleAxis + FoldConstant suboptimal on Conv + BN + Relu?

URL: https://github.com/apache/incubator-tvm/issues/4739#issuecomment-575864022
 
 
   In your example, the params of batch_norm are variables, so neither FoldScaleAxis nor FoldConstant is applicable.
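
   One way to make the folding apply is to feed the params generated by create_workload back to the optimizer, so that the batch_norm scale and shift become relay constants. A minimal sketch based on the script above (the params argument of relay.build_module.optimize binds them into the function as constants before the passes run; exact behavior may differ slightly across TVM versions):

    ```
    # Sketch: bind the generated params as constants so that FoldScaleAxis
    # and FoldConstant can fold the batch_norm scale into the conv2d weights.
    mod, params = tvm.relay.testing.create_workload(net)
    with relay.build_config(opt_level=3, disabled_pass=["AlterOpLayout"]):
        opt_mod, _ = relay.build_module.optimize(mod, "llvm", params)
        print(opt_mod["main"].astext())
    ```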


[GitHub] [incubator-tvm] masahi closed issue #4739: FoldScaleAxis + FoldConstant suboptimal on Conv + BN + Relu?

URL: https://github.com/apache/incubator-tvm/issues/4739
 
 
   
